ETOOBUSY 🚀 minimal blogging for the impatient

Allocating games in tournaments

Apr 14, 2020 · #algorithm #game #maths #boardgamearena

This post is a part of the "Tournaments games allocation" series:

1. Allocating games in tournaments
2. Allocating games in tournaments - example
3. Allocating games in tournaments - premium games and players
4. Allocating games in tournaments - 3 players practicalities
5. Allocating games in tournaments - 6 players matches
6. Allocating games in tournaments - 6 players matches, again
7. Allocating games in tournaments - 6 players matches, premium
8. Allocating games in tournaments - a program
9. Torneo - a tournament management system

I've become curious about how to organize tournaments (in the sense of allocating the games in them) when games have more than two players.

In these days of Coronavirus there has been a surge in online playing, and board games are no exception. In particular, BoardGameArena is a very nice place to play online, and they saw about a 6x increase in their traffic (which gave them a few issues…).

One thing that always tickled me is their tournament system. Alas, as of today it only caters to two-player games, i.e. even when a game would allow for additional players, its instances in tournaments only admit two. This is sub-optimal for a lot of games that I like (e.g. Tokaido) and that are better played with three or more players.

Two-player tournaments are easy…

… and there are a lot of ways to set them up. One of the easiest is to gather a number of players that is a power of 2, then halve them at each round with direct eliminations. One consequence of direct-elimination matches is that half of the people will play only a single game in the tournament. Which might be good for the competition, less so for people who want to play 🤨

In the two-player space the answer to this issue is round robin tournaments, in which each participant plays against every other one.
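A round-robin schedule is also easy to generate programmatically. Here is a minimal Python sketch using the classical circle method (the function name and labels are mine, not something from BoardGameArena):

```python
def round_robin(players):
    """Classical circle method: fix one player, rotate the others.

    Returns a list of rounds; each round is a list of pairings."""
    ps = list(players)
    if len(ps) % 2:             # odd count: add a bye marker
        ps.append(None)
    n = len(ps)
    rounds = []
    for _ in range(n - 1):
        pairs = [(ps[i], ps[n - 1 - i]) for i in range(n // 2)]
        rounds.append([p for p in pairs if None not in p])
        ps.insert(1, ps.pop())  # rotate everything except the fixed player
    return rounds

# 4 players -> 3 rounds, everybody plays in every round,
# and each of the 6 possible pairings occurs exactly once.
for rnd in round_robin([1, 2, 3, 4]):
    print(rnd)
```

With an odd number of players the extra `None` slot acts as a bye, so everyone still meets everyone else exactly once.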
This is the same approach as, for example, sports leagues (although there each participant often plays two matches against every other one).

Direct elimination for more players?

One easy extension is to organize $N$-player games in direct-elimination matches, assuming that there are $N^p$ players. This means having $p$ rounds. Alas, this has the same drawback as direct-elimination tournaments for two-player games, with the additional negative aspects that:

- at each round, more people stop playing ($\frac{2}{3}$ of them for three-player games, $\frac{3}{4}$ for four-player games, and so on…);
- more players are needed for the same number of rounds (e.g. a three-round tournament for two-player games requires $2^3 = 8$ participants, while for three-player games it would require $3^3 = 27$ participants, i.e. more than three times as many).

One possibility is to let more players through each round. For example, letting two players pass to the next round in four-player games basically means keeping the same structure as in two-player direct-elimination tournaments: with 8 players, the first round has two four-player matches (slots 1-1 through 1-8) whose top two finishers fill a single four-player final (slots 2-1 through 2-4). At any rate, this does not solve the need for people to play more!

k-player leagues?

Another solution would be to find a way to extend the "league" approach to $k$-player games (out of a total population of $n$ players participating in the tournament). One straightforward way to do this is to form all $n \choose k$ subsets of $k$ players out of the $n$ participants, so that each player takes part in ${n-1} \choose {k-1}$ matches. This might mean a few too many matches though: a tournament with 8 players overall and 4-player matches would mean 70 matches overall, where each player competes in 35 of them. Oops.

One observation is that many of those matches are… redundant.
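Those counts are quick to sanity-check with Python's itertools (a throwaway sketch, not part of the original post):

```python
from itertools import combinations

n, k = 8, 4                          # 8 participants, 4-player matches
matches = list(combinations(range(1, n + 1), k))
print(len(matches))                  # C(8,4) = 70 matches overall

# each player sits at C(7,3) = 35 of those tables
per_player = sum(1 for m in matches if 1 in m)
print(per_player)                    # 35
```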
From the example with 4-player matches out of 8 total participants, we have (among others) the following five matches: (1, 2, 3, 4), (1, 2, 3, 5), (1, 2, 3, 6), (1, 2, 3, 7), and (1, 2, 3, 8). They are somehow… pretty similar, in that players 1, 2, and 3 play five matches with each other inside them. Hence, while it's interesting that each participant plays against every other one at some time, we can probably remove a lot of the redundant games and still enjoy the tournament. In general, we should aim for an arrangement that limits how many times the same people play at the same table.

One answer to the challenge in the previous section is to leverage Block Designs. This is the definition for t-designs (slightly adapted):

Given a finite set $X$ (of $v$ elements called points) and integers $t$, $k$, $r$, $\lambda \geq 1$, we define a $t$-design $B$ to be a family of $k$-element subsets of $X$, called blocks, such that any $x$ in $X$ is contained in $r$ blocks, and any set of $t$ distinct points is contained in $\lambda$ blocks. The number of elements in family $B$ is $b$.

Uh? Translated into tournamentese:

- we have a set $X$ of $v$ participants in the tournament;
- we want to organize matches with $k$ players each;
- each player competes in $r$ matches;
- we want any $t$ players to compete in exactly $\lambda$ matches in which all of them are present at the same time.

In case we want to limit the number of times pairs of players compete at the same table, we set $t = 2$, deal with 2-designs, and call them BIBDs (Balanced Incomplete Block Designs). Actually, block design usually refers to 2-designs, and we will stick with them in the following. Easy right? Next problem pleaaaaase!

Well, not so fast.

Block Designs for arranging matches?

There are several different ways to create BIBDs, not all totally amenable to tabletop games. For example, consider the BIBD induced by the Fano plane (numbers are player identifiers, from 0 up to 6); one labelling gives these seven blocks:

(0, 1, 2) (0, 3, 4) (0, 5, 6) (1, 3, 5) (1, 4, 6) (2, 3, 6) (2, 4, 5)

It might be applicable to a tournament of 7 players for games that accept 3 players at a time.
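Checking that the Fano plane really is a 2-design is a one-screen exercise. The sketch below uses one standard labelling of its seven blocks (my own choice; any relabelling behaves the same) and verifies the parameters $v = 7$, $k = 3$, $r = 3$, $\lambda = 1$:

```python
from itertools import combinations
from collections import Counter

# One standard labelling of the Fano plane's seven blocks on points 0..6
# (an assumption for illustration; any relabelling works the same).
blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6),
          (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]

# r = 3: every point lies in exactly three blocks
point_count = Counter(p for b in blocks for p in b)
assert all(c == 3 for c in point_count.values())

# lambda = 1: every pair of distinct points lies in exactly one block
pair_count = Counter(frozenset(pr) for b in blocks
                     for pr in combinations(b, 2))
assert len(pair_count) == 21 and all(c == 1 for c in pair_count.values())

# the projective-plane catch: any two blocks share exactly one point
assert all(len(set(a) & set(b)) == 1 for a, b in combinations(blocks, 2))

print("Fano plane is a 2-(7,3,1) design")
```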
Everyone plays against everyone else, but only once. Yay!

There is a big defect though. Like any other block design based on finite projective planes, it has the characteristic that any two blocks always share exactly one point. Which means: if you want to limit the number of games that are played at the same time, e.g. so that each player sits at a single table at any given moment (as a real-time tournament would more or less require), then you can only run a very limited number of games in parallel. In particular, if you limit each player to one game at most, at any time only a single game can run, where three participants play and four wait.

Finite affine planes to the rescue

On the other hand, BIBDs induced by finite affine planes do not have this limitation. It's easy to get one from a finite projective plane: just get rid of one block and all the points in it, and what you're left with is a finite affine plane. In the Fano plane case, let's get rid of the block (0, 5, 6), as well as players 0, 5, and 6 (using a standard labelling of the blocks):

(0, 1, 2) -> (1, 2)
(0, 3, 4) -> (3, 4)
(1, 3, 5) -> (1, 3)
(2, 4, 5) -> (2, 4)
(1, 4, 6) -> (1, 4)
(2, 3, 6) -> (2, 3)
(0, 5, 6) -> (removed)

This should look familiar: it's basically a round-robin arrangement for four players, which allows two real-time games to go on at the same time, for a total of 3 rounds:

1st round: (1, 2) (3, 4)
2nd round: (1, 3) (2, 4)
3rd round: (1, 4) (2, 3)

This is quite amenable now:

- everybody plays against each other, but no more than once;
- everybody plays at each round;
- everybody plays a reasonable number of games.

Alternative paths of investigation

Another promising path is to explore the so-called Social Golfer Problem (see also here), which is formulated as follows:

The task is to schedule $g \cdot p$ golfers in $g$ groups of $p$ players for $w$ weeks such that no two golfers play in the same group more than once.
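Before moving on: the projective-to-affine reduction described above is mechanical to verify in code. A small sketch (the block labels are my own assumption, chosen so that the surviving pairs match the rounds listed earlier):

```python
# Drop one block of the Fano plane together with its three points; the
# truncated remaining blocks form a round robin for the other four players.
blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6),
          (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]
removed = (0, 5, 6)

pairs = [tuple(p for p in b if p not in removed)
         for b in blocks if b != removed]
print(pairs)   # six pairs covering every 2-subset of {1, 2, 3, 4} once
```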
The goal is to find the minimum number of weeks $w$ for which this can happen. I'm also not sure at the moment whether this also requires that every golfer still play with every other one… but it seems a promising thing to code in the future, and a way to overcome some rigidity in the schemas that we will investigate in the short term.

More? Comments?

If you want to take a look at all the posts, the full series list is at the top of this page.

Comments? Octodon, Twitter, GitHub, Reddit, or drop me a line!

ETOOBUSY © 2023 by polettix ― Powered by Jekyll and the TextLog theme
Are there any specific problems known to be undecidable for reasons other than diagonalization, self-reference, or reducibility?

Every undecidable problem that I know of falls into one of the following categories:

1. Problems that are undecidable because of diagonalization (indirect self-reference). These problems, like the halting problem, are undecidable because you could use a purported decider for the language to construct a TM whose behavior leads to a contradiction. You could also lump many undecidable problems about Kolmogorov complexity into this camp.

2. Problems that are undecidable due to direct self-reference. For example, the universal language can be shown to be undecidable for the following reason: if it were decidable, then it would be possible to use Kleene's recursion theorem to build a TM that gets its own encoding, asks whether it will accept its own input, then does the opposite.

3. Problems that are undecidable due to reductions from existing undecidable problems. Good examples here include the Post Correspondence Problem (reduction from the halting problem) and the Entscheidungsproblem.

When I teach computability theory to my students, many of them pick up on this as well and often ask me whether there are any problems we can prove undecidable without ultimately tracing back to some kind of self-reference trickery. I can prove nonconstructively that there are infinitely many undecidable problems by a simple cardinality argument relating the number of TMs to the number of languages, but this doesn't give a specific example of an undecidable language.

Are there any languages known to be undecidable for reasons that aren't listed above? If so, what are they and what techniques were used to show their undecidability?

computability proof-techniques undecidability – asked by templatetypedef

$\begingroup$ @EvilJS My understanding was that the undecidability proof there involved the ability to simulate TMs, though perhaps I'm mistaken?
$\endgroup$ – templatetypedef Dec 26 '15 at 22:28

$\begingroup$ You can say Rice's theorem might not fit into any of these categories, but the proof of the theorem does. $\endgroup$ – Ryan Dec 27 '15 at 2:07

$\begingroup$ @EvilJS That's a good point. Really, what I'm looking for here is whether there is some fundamentally different technique we can use. It would be nice, for example, if someone identified a problem as undecidable in a case where that problem has no known relation to TM self-reference or a Gödel-type argument. If the best we can do is "we figured this one out a long time ago, then realized that it's easier to prove it another way," that in a sense would be an answer - the three techniques above fundamentally account for all the proofs of undecidability we know of. $\endgroup$ – templatetypedef Dec 28 '15 at 22:22

$\begingroup$ The busy beaver function grows too fast for any program to compute. Concretely, you can define a function $f(n)$ as one plus the largest number computed by a program of length at most $n$. Does that count as diagonalization? $\endgroup$ – Yuval Filmus Dec 29 '15 at 20:29

$\begingroup$ @YuvalFilmus Perhaps I'm being too strict here, but that sounds like a diagonal-type argument to me: you're constructing a function that is defined to be different from all functions computed by TMs. $\endgroup$ – templatetypedef Dec 29 '15 at 20:48

Yes, there are such proofs. They are based on the Low Basis Theorem. See this answer to the question "Are there any proofs of the undecidability of the halting problem that do not depend on self-referencing or diagonalization?" on cstheory for more. – Kaveh

$\begingroup$ If anyone is interested in advanced techniques in computability theory, check out Robert I. Soare's books Recursively Enumerable Sets and Degrees and Computability Theory and Applications.
$\endgroup$ – Kaveh Jan 3 '16 at 10:42

$\begingroup$ Correct me if I'm wrong, but doesn't the proof of the low basis theorem involve applying a functional to itself and asking whether it doesn't produce a value? If so, isn't this just a layer of indirection on top of a diagonal argument? $\endgroup$ – templatetypedef Jan 3 '16 at 18:58

$\begingroup$ @templatetypedef, I am not an expert but as far as I understand, no. See e.g. page 109 in Soare's book. $\endgroup$ – Kaveh Jan 3 '16 at 19:02

$\begingroup$ @templatetypedef, ps1: there is some vagueness in the question about what we consider diagonalization. If we are not careful we may expand what we consider to be diagonalization every time we see something which was not. Take e.g. priority methods, or any general method of constructing objects part by part in a way that avoids being equal to any object from a given class. $\endgroup$ – Kaveh Jan 3 '16 at 19:21

$\begingroup$ @David, :) I open the page from the book I want to share, click on the share button on top, and remove the parameters except the id and pg from the link. $\endgroup$ – Kaveh Jan 3 '16 at 20:35

This is not exactly an affirmative answer, but an attempt at something close to what is asked for, via a creative angle. There are quite a few problems in physics now that are "far distant" from the mathematical/theoretical formulations of undecidability, and they seem increasingly "remote" from, and "bear little resemblance to", the original formulations involving the halting problem etc.; of course they use the halting problem at the root, but the chains of reasoning have become increasingly distant and also have a strong "applied" aspect/nature. Unfortunately there do not seem to be any great surveys in this area yet.
A recent problem that was "surprisingly" proven undecidable in physics and has attracted a lot of attention:

Undecidability of the spectral gap / Cubitt, Perez-Garcia, Wolf

The spectral gap—the energy difference between the ground state and first excited state of a system—is central to quantum many-body physics. Many challenging open problems, such as the Haldane conjecture, the question of the existence of gapped topological spin liquid phases, and the Yang–Mills gap conjecture, concern spectral gaps. These and other problems are particular cases of the general spectral gap problem: given the Hamiltonian of a quantum many-body system, is it gapped or gapless? Here we prove that this is an undecidable problem. Specifically, we construct families of quantum spin systems on a two-dimensional lattice with translationally invariant, nearest-neighbour interactions, for which the spectral gap problem is undecidable. This result extends to undecidability of other low-energy properties, such as the existence of algebraically decaying ground-state correlations.

What you seem to be observing in the question is that (informally) undecidability proofs all have a certain "self-referential" structure, and this has been made precise in more advanced mathematics, so that both the Turing halting problem and Gödel's theorem can be seen as instances of the same underlying phenomenon. See e.g. "Halting problem, uncomputable sets: common mathematical proof?":

The halting theorem, Cantor's theorem (the non-isomorphism of a set and its powerset), and Goedel's incompleteness theorem are all instances of the Lawvere fixed point theorem, which says that for any cartesian closed category, if there is an epimorphic map $e: A \to (A \Rightarrow B)$ then every $f: B \to B$ has a fixed point.

There is also a long meditation on this theme of the (intrinsic?) interconnectedness of self-referentiality and undecidability in the books by Hofstadter.
Another area where undecidability results are common, and were initially somewhat surprising, is fractal phenomena. The crosscutting appearance and significance of undecidable phenomena across nature is by now close to a recognized physical principle, first observed by Wolfram as the "principle of computational equivalence". – vzn

$\begingroup$ Other "surprising/applied" areas of undecidability: aperiodic tilings, eventual stabilization in Conway's Game of Life (cellular automata). $\endgroup$ – vzn Dec 29 '15 at 22:47

$\begingroup$ My understanding is that the proofs that all of these problems are undecidable all boil down to reductions from the halting problem. Is that incorrect? $\endgroup$ – templatetypedef Dec 29 '15 at 23:45

$\begingroup$ The answer basically concedes that (all known undecidability results can be reduced to the halting problem). Your question is nearly phrased as a conjecture; I am not aware of any knowledge conflicting with it, and I see a lot of circumstantial evidence in its favor. But the closest thing to a formal proof known is apparently the fixed-point formulations of undecidability (there does not seem to be another formal formulation of "self-referential"). Another way of saying it all is that Turing completeness and undecidability are two views of essentially the same phenomenon. $\endgroup$ – vzn Dec 30 '15 at 16:28
\begin{document} \title[]{Small data global existence and decay for \\ relativistic Chern--Simons equations} \author{Myeongju Chae} \address{Department of Mathematics, Hankyung University, Anseong-si, Gyeonggi-do, Korea} \email{[email protected]} \author{Sung-Jin Oh} \address{Department of Mathematics, UC Berkeley, Berkeley, CA, USA} \email{[email protected]} \thanks{The authors thank Hyungjin Huh for helpful discussions. M.~Chae was partially supported by NRF-2011-0028951. S.-J. Oh is a Miller Research Fellow, and acknowledges support from the Miller Institute. } \begin{abstract} We establish a general small data global existence and decay theorem for Chern--Simons theories with a general gauge group, coupled with a massive relativistic field of spin 0 or 1/2. Our result applies to a wide range of relativistic Chern--Simons theories considered in the literature, including the abelian/non-abelian self-dual Chern--Simons--Higgs equation and the Chern--Simons--Dirac equation. A key idea is to develop and employ a gauge invariant vector field method for relativistic Chern--Simons theories, which allows us to avoid the long range effect of charge. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} In this article, we consider Chern--Simons gauge theories with a general gauge group $\mathfrak{G}$ coupled with a massive field of spin 0 (Higgs) or 1/2 (Dirac) on $\mathbb R^{1+2}$. Our main result (Theorems~\ref{thm:CSH} and \ref{thm:CSD}) is global existence of a unique solution to small compactly supported initial data, along with sharp decay rates. We give a unified proof that applies to a wide range of relativistic Chern--Simons theories; some well-studied examples include the self-dual Chern--Simons--Higgs equation with abelian and non-abelian gauge groups, and the abelian Chern--Simons--Dirac equation. 
As will be explained in Section~\ref{subsec:main-ideas} in more detail, the main difficulty in the proof is the long range effect of charge, which manifests as the slow spatial decay of the magnetic potential. In order to overcome this difficulty, we develop a gauge covariant vector field approach \cite{MR1672001, MR2131047, Lindblad:2006vh, Bieri:2014lq} for relativistic Chern--Simons theories, which allows us to avoid the gauge potential in the analysis. An issue for executing this strategy is the anomalous commutation property of the Chern--Simons coupled Klein--Gordon equation; see \eqref{eq:comm-problem} for details. This issue is taken care of by adapting the ODE technique of \cite{MR2188297, MR2056833}, developed to address possible long range effects, to our gauge covariant setting. We begin by describing the Chern--Simons--Higgs and Dirac equations with a general gauge group in Sections~\ref{subsec:CSH} and \ref{subsec:CSD}, respectively. In Section~\ref{subsec:main-results}, we state the main results of the paper in precise terms. Section~\ref{subsec:main-ideas} contains an explanation of the main ideas of our proof, and in Section~\ref{subsec:history} we give a brief discussion of the history of the problem and related results. The introduction ends with a short outline of the rest of the paper in Section~\ref{subsec:outline}. \subsection{Non-abelian self-dual Chern--Simons--Higgs equation} \label{subsec:CSH} Here we first give a general formulation of the Chern--Simons--Higgs equation with a general gauge group $\mathfrak{G}$; see \eqref{eq:CSH-general}. This formulation requires a choice of a real scalar potential $\mathcal U(\varphi)$. We then describe a particular choice of $\mathcal U(\varphi)$ leading to the \emph{self-dual Chern--Simons--Higgs equation} \eqref{eq:CSH}. 
For concreteness, our first main theorem (Theorem~\ref{thm:CSH}) is stated for this equation, but our proof is clearly valid for more general potentials $\mathcal U(\varphi)$; see Remark~\ref{rem:CSH-general}. Important special cases of \eqref{eq:CSH} include the \emph{abelian self-dual equation} ($\mathfrak{G} = \mathrm{U}(1)$ and $V = \mathbb C$) and the \emph{non-abelian self-dual equation with adjoint coupling} ($\mathfrak{G} = \mathrm{SU}(n)$ and $V = \mathrm{sl}(n; \mathbb C)$); see Examples~\ref{ex:a-CSH} and \ref{ex:na-CSH} below. Consider a Lie group $\mathfrak{G}$ with the associated Lie algebra $\mathfrak{g}$, which possesses a positive-definite metric $\LieMet{\cdot}{\cdot}$ that is bi-invariant (i.e., invariant under the adjoint action $\mathfrak{G} \times \mathfrak{g} \ni (g, a) \mapsto g a g^{-1} \in \mathfrak{g}$). Let $V$ be a complex vector space equipped with an inner product $\brk{\cdot, \cdot}_{V}$, on which the group $\mathfrak{G}$ acts via a unitary representation $\rho: \mathfrak{G} \to \mathrm{U} (V)$. In what follows, the subscript $V$ in $\brk{\cdot, \cdot}_{V}$ will often be omitted. \begin{remark} When $\mathfrak{G}$ is \emph{compact}, which is the case in all examples below, a bi-invariant metric always exists, since any left-invariant metric can be made bi-invariant by averaging its right-translates using the Haar measure; recall that the Haar measure is finite and bi-invariant on compact Lie groups \cite[Chapter 1]{MR3136522}. Moreover, for any representation $\rho: \mathfrak{G} \to \mathrm{GL}(V)$ there exists an inner product on $V$ which makes $\rho$ unitary, by starting with any inner product $\brk{\cdot, \cdot}$ and averaging its left-translates $\brk{\rho(g) \, \cdot , \rho(g) \, \cdot }$ using the Haar measure. 
\end{remark} Let $\mathbb R^{1+2}$ denote the (2+1)-dimensional Minkowski space equipped with the metric \begin{equation*} \eta_{\mu \nu} = (\eta^{-1})^{\mu \nu} = \mathrm{diag} \, (\m1,\p1,\p1) \end{equation*} in the rectilinear coordinates $(x^{0}, x^{1}, x^{2})$. Let $E$ be a vector bundle with fiber $V$ over $\mathbb R^{1+2}$ with structure group $\mathfrak{G}$. We refer to the sections of $E$ as \emph{scalar multiplet fields}. Since $\mathbb R^{1+2}$ is contractible, every fiber bundle over this space is trivial, i.e., $E$ is (smoothly) equivalent to the product bundle $\mathbb R^{1+2} \times V$. Hence the scalar multiplet fields may be concretely realized as the $V$-valued functions on $\mathbb R^{1+2}$; see Section~\ref{subsec:gauge-str} below. In order to differentiate a scalar multiplet field, we introduce the notion of a \emph{covariant derivative} ${}^{(A)}\bfD$ on $E$, described by a $\mathfrak{g}$-valued 1-form $A(\cdot)$ in the following fashion: \begin{equation} \label{eq:covd-def} {}^{(A)}\bfD_{X} \varphi = \nabla_{X} \varphi + A(X) \cdot \varphi. \end{equation} Here $\varphi$ is a scalar multiplet field (i.e., a $V$-valued function), $X$ is a vector on $\mathbb R^{1+2}$, $\nabla_{X}$ is the usual directional derivative of $\varphi$ (viewed as a $V$-valued function) in the direction $X$ and $A(X) \in \mathfrak{g}$ acts on $\varphi$ by the infinitesimal representation $\mathrm{d} \rho \restriction_{I}: \mathfrak{g} \to \mathrm{u}(V)$. Given two vector fields $X, Y$ on $\mathbb R^{1+2}$, the associated \emph{curvature 2-form} $F = F[A]$ is defined by the relation \begin{equation} \label{eq:curv-def} F(X, Y) \varphi = \big( {}^{(A)}\bfD_{X} {}^{(A)}\bfD_{Y} - {}^{(A)}\bfD_{Y} {}^{(A)}\bfD_{X} - {}^{(A)}\bfD_{\LieBr{X}{Y}} \big) \varphi. 
\end{equation} In terms of the connection 1-form $A$, the curvature $F$ takes the form \begin{equation} \label{eq:curv-eq1} F = \mathrm{d} A + \frac{1}{2} [A \wedge A] \end{equation} (see Section~\ref{subsec:extr-calc} for the notation) or in coordinates, \begin{equation} \label{eq:curv-eq2} F_{\mu \nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} + \LieBr{A_{\mu}}{A_{\nu}}. \end{equation} In analogy with Maxwell's theory of electromagnetism, the $F_{12}$ component of the curvature 2-form is sometimes called the \emph{magnetic field}, and $F_{01}, F_{02}$ are referred to as the \emph{electric field}. The components $A_{1}, A_{2}$ are alternatively referred to as the \emph{magnetic potential}, and $A_{0}$ as the \emph{electric potential}. The Lagrangian density for the Chern--Simons--Higgs system is given by \begin{equation*} L[A, \varphi] = \frac{\kappa}{2} L_{CS}[A] - \brk{{}^{(A)}\bfD^{\mu} \varphi, {}^{(A)}\bfD_{\mu} \varphi} - \mathcal U(\varphi) \end{equation*} where $\kappa \in \mathbb R \setminus \set{0}$ is called the coupling constant and $\mathcal U(\varphi)$ is a real-valued scalar potential. The term $L_{CS}[A]$ is the \emph{Chern--Simons Lagrangian}, defined as \begin{equation*} L_{CS}[A] = \epsilon^{\mu \nu \rho} \Big( \brk{A_{\mu}, \partial_{\nu} A_{\rho}}_{\mathfrak{g}} + \frac{1}{3} \brk{A_{\mu}, [A_{\nu}, A_{\rho}]}_{\mathfrak{g}} \Big). \end{equation*} We say that $(A, \varphi)$ is a solution to the \emph{Chern--Simons--Higgs equation} if it is a formal critical point of the action $(A, \varphi) \mapsto \mathcal S[A, \varphi] = \int_{\mathbb R^{1+2}} L[A, \varphi] \, \mathrm{d} t \mathrm{d} x$. 
The corresponding Euler--Lagrange equation satisfied by the formal critical points takes the form \begin{equation} \label{eq:CSH-general} \left\{ \begin{aligned} {}^{(A)} \Box \varphi =& \frac{1}{2} \frac{\delta \mathcal U}{\delta \varphi}, \\ F = & \frac{1}{\kappa} (\star J_{\mathrm{CSH}}), \\ J_{\mathrm{CSH}} =& \brk{\mathcal T \varphi, {}^{(A)}\ud \varphi} + \brk{{}^{(A)}\ud \varphi, \mathcal T \varphi}. \end{aligned} \right. \end{equation} Here ${}^{(A)} \Box = {}^{(A)}\bfD^{\mu} \, {}^{(A)}\bfD_{\mu}$ is the covariant d'Alembertian, ${}^{(A)}\ud \varphi = {}^{(A)}\bfD_{\mu} \varphi \, \mathrm{d} x^{\mu}$ is the covariant differential of $\varphi$ and $\star$ is the Hodge star (see Section~\ref{subsec:extr-calc}). The notation $\frac{\delta \mathcal U}{\delta \varphi}$ refers to the functional derivative of $\mathcal U(\varphi)$, characterized by \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} s} \Big\vert_{s = 0} \Big( \int_{\mathbb R^{1+2}} \mathcal U(\varphi + s f) \, \mathrm{d} t \mathrm{d} x \Big) = \int_{\mathbb R^{1+2}} \mathrm{Re} \brk{\frac{\delta \mathcal U}{\delta \varphi}, f} \, \mathrm{d} t \mathrm{d} x \end{equation*} for all $V$-valued $f \in C^{\infty}_{0}(\mathbb R^{1+2})$. The linear operator $\mathcal T : V \to \mathfrak{g} \otimes_{\mathbb R} V$ is defined as follows: Given an orthonormal basis $\set{e_{A}} \subseteq \mathfrak{g}$ with respect to $\brk{\cdot, \cdot}_{\mathfrak{g}}$, let \begin{equation*} \mathcal T v = \sum_{A} e_{A} \otimes \mathcal T^{A} v, \quad \hbox{ where } \mathcal T^{A} : V \to V, \ v \mapsto e_{A} \cdot v \hbox{ for each index } A. \end{equation*} A compact way of denoting $\mathcal T^{A}$ while respecting the difference between upper and lower indices is to write $\mathcal T^{A} v = \sum_{A'} \delta^{A A'} e_{A'} \cdot v$, where $\delta^{AA'}$ is the diagonal symbol that equals $1$ when $A = A'$ and vanishes otherwise. 
The inner product between $\mathcal T v \in \mathfrak{g} \otimes_{\mathbb R} V$ and $w \in V$ is naturally defined to be an element of $\mathfrak{g}$ by the formula \begin{equation*} \brk{\mathcal T v, w} = \sum_{A} \brk{\mathcal T^{A} v, w} e_{A}, \quad \brk{w, \mathcal T v} = \sum_{A} \brk{w, \mathcal T^{A} v} e_{A}. \end{equation*} According to these definitions, note that $\brk{a \cdot v, w} = \brk{a, \brk{\mathcal T v , w}}_{\mathfrak{g}}$ for $a \in \mathfrak{g}$ and $v, w \in V$. Note also that $\brk{\mathcal T \varphi, {}^{(A)}\ud \varphi}$ and $\brk{{}^{(A)}\ud \varphi, \mathcal T \varphi}$ in \eqref{eq:CSH-general} define $\mathfrak{g}$-valued 1-forms. In the study of the Chern--Simons--Higgs equation, a special emphasis is given to the \emph{self-dual} case, in which the energy functional has a particular structure so that its minima can be found by solving a simpler first order elliptic equation (Bogomol'nyi equation). In this case, the scalar potential $\mathcal U$ is given by \begin{equation} \label{eq:CSH-ptnl-lagrange} \mathcal U(\varphi) = \frac{1}{\kappa^{2}} \Big\vert \delta_{AA'} \brk{\mathcal T^{A} \varphi, \varphi} \mathcal T^{A'} \varphi+ v^{2} \varphi \Big\vert^{2}, \end{equation} where $v \in \mathbb R \setminus \set{0}$ is a constant playing the role of the mass parameter for $\varphi$. Computing the functional derivative, we are led to the following \emph{self-dual Chern--Simons--Higgs equation}: \begin{equation} \label{eq:CSH} \tag{CSH} \left\{ \begin{aligned} {}^{(A)} \Box \varphi - \frac{v^{4}}{\kappa^{2}} \varphi =& U_{\mathrm{CSH}}(\varphi), \\ F = & \frac{1}{\kappa} (\star J_{\mathrm{CSH}}), \\ J_{\mathrm{CSH}} =& \brk{\mathcal T \varphi, {}^{(A)}\ud \varphi} + \brk{{}^{(A)}\ud \varphi, \mathcal T \varphi}. \end{aligned} \right. 
\end{equation} where \begin{equation} \label{eq:CSH-ptnl} \begin{aligned} U_{\mathrm{CSH}}(\varphi) = & \frac{4 v^{2}}{\kappa^{2}} \delta_{A A'} \brk{\mathcal T^{A} \varphi, \varphi} \mathcal T^{A'} \varphi \\ & + \frac{1}{\kappa^{2}} \delta_{A A'} \delta_{B B'} \brk{\mathcal T^{A} \varphi, \varphi} \brk{(\mathcal T^{A'} \mathcal T^{B'} + \mathcal T^{B'} \mathcal T^{A'}) \varphi, \varphi} \mathcal T^{B} \varphi \\ & + \frac{1}{\kappa^{2}} \delta_{A A'} \delta_{B B'}\brk{\mathcal T^{A} \varphi, \varphi} \brk{\mathcal T^{B} \varphi, \varphi} \mathcal T^{A'} \mathcal T^{B'} \varphi . \end{aligned} \end{equation} Our first main theorem (Theorem~\ref{thm:CSH}) is small data global existence for the general self-dual Chern--Simons--Higgs equation \eqref{eq:CSH}. We remark that Theorem~\ref{thm:CSH} is stated for \eqref{eq:CSH} only for the sake of concreteness. In fact, due to the perturbative nature of the proof, self-duality is not essential for this theorem to hold; see Remark~\ref{rem:CSH-general}. We now describe important special cases of \eqref{eq:CSH}. We begin with the case of the abelian gauge group $\mathfrak{G} = \mathrm{U}(1)$, which has been extensively studied. \begin{example}[Abelian self-dual Chern--Simons--Higgs {\cite[Section~IV.A]{dunne1995self}}] \label{ex:a-CSH} Let $\mathfrak{G} = \mathrm{U}(1) = \set{e^{i \theta} \in \mathbb C}$, so that $\mathfrak{g} = \mathrm{u}(1) = i \mathbb R$ and $\brk{i a, i b}_{\mathfrak{g}} = ab$ for $a, b \in \mathbb R$. Take $V = \mathbb C$, equipped with the usual inner product $\brk{z, w} = z \overline{w}$, and let $\rho(e^{i \theta}) z= e^{i \theta} z$ for $e^{i \theta} \in \mathrm{U}(1)$ and $z \in \mathbb C$. Using $i$ as a basis for $\mathfrak{g} = \mathrm{u}(1)$, we may write $\mathcal T v = i v$ and ${}^{(A)}\bfD = \nabla + i A$ for a real-valued 1-form $A$. Therefore, \begin{equation*} J_{\mathrm{CSH}} = i \big( \varphi \overline{{}^{(A)}\ud \varphi} - \overline{\varphi} \, {}^{(A)}\ud \varphi \big). 
\end{equation*} The self-dual potential is given by \begin{equation*} \mathcal U(\varphi) = \frac{1}{\kappa^{2}} \abs{\varphi}^{2} \big( \abs{\varphi}^{2} - v^{2} \big)^{2} \end{equation*} for some $v \in \mathbb R \setminus \set{0}$. Hence $U_{\mathrm{CSH}}(\varphi)$ takes the form \begin{equation*} U_{\mathrm{CSH}}(\varphi) = \frac{1}{\kappa^{2}} \Big( - 4 v^{2} \abs{\varphi}^{2} \varphi + 3 \abs{\varphi}^{4} \varphi \Big). \end{equation*} \end{example} Another important special case of \eqref{eq:CSH} is when the structure group $\mathfrak{G}$ is $\mathrm{SU}(N)$ $(N > 1)$, and it acts on the space $\mathrm{sl}(N, \mathbb C)$ (the complexification of the Lie algebra $\mathrm{su}(N)$) by the adjoint action. \begin{example}[Non-abelian self-dual Chern--Simons--Higgs with adjoint coupling {\cite[Section~V.B]{dunne1995self}}] \label{ex:na-CSH} Let $\mathfrak{G} = \mathrm{SU}(N)$ $(N > 1)$ be the group of $N \times N$ unitary matrices with unit determinant, so that $\mathfrak{g} = \mathrm{su}(N)$ is the Lie algebra of $N \times N$ anti-hermitian matrices with zero trace and $\brk{a, b}_{\mathfrak{g}} = \textrm{tr}(a b^{\dagger})$ for matrices $a, b$. We take the state space to be the complexification of the Lie algebra $\mathfrak{g} = \mathrm{su}(N)$, i.e., $V = \mathrm{sl}(N, \mathbb C)$ is the space of $N \times N$ complex matrices with zero trace and $\brk{v, w}_{V} = \textrm{tr} (v w^{\dagger})$. The group $\mathfrak{G}$ acts on $V$ via the adjoint action $\rho(g) v = g v g^{-1}$ for $g \in \mathrm{SU}(N)$ and $v \in \mathrm{sl}(N, \mathbb C)$.
In this case, the current $J_{\mathrm{CSH}}$ and the self-dual potential $\mathcal U(\varphi)$ take the form \begin{align*} J_{\mathrm{CSH}} =& -\LieBr{\varphi^{\dagger}}{{}^{(A)}\ud \varphi} + \LieBr{({}^{(A)}\ud \varphi)^{\dagger}}{\varphi}, \\ \mathcal U(\varphi) = & \frac{1}{\kappa^{2}} \Big\vert \LieBr{\LieBr{\varphi}{\varphi^{\dagger}}}{\varphi} + v^{2} \varphi \Big\vert^{2}, \end{align*} for some $v \in \mathbb R \setminus \set{0}$. Hence $U_{\mathrm{CSH}}(\varphi)$ is given by \begin{equation*} U_{\mathrm{CSH}}(\varphi) = \frac{4 v^{2}}{\kappa^{2}} \LieBr{\varphi}{\LieBr{\varphi}{\varphi^{\dagger}}} + \frac{1}{\kappa^{2}} \Big( 2 \LieBr{\LieBr{\varphi}{\LieBr{\varphi^{\dagger}}{\LieBr{\varphi}{\varphi^{\dagger}}}}}{\varphi} + \LieBr{\LieBr{\varphi}{\LieBr{\varphi}{\varphi^{\dagger}}}}{\LieBr{\varphi}{\varphi^{\dagger}}} \Big). \end{equation*} \end{example} \subsection{Non-abelian Chern--Simons--Dirac equations} \label{subsec:CSD} Here we describe the Chern--Simons--Dirac equation with a general gauge group $\mathfrak{G}$. Our formulation includes the well-studied abelian case \cite{MR2290338} as a special case; see Example~\ref{ex:a-CSD}. Let $\mathfrak{G}$ be a Lie group with a positive-definite bi-invariant metric, $W$ be a complex vector space with an inner product $\brk{\cdot, \cdot}_{W}$, and $\rho : \mathfrak{G} \to U(W)$ be a unitary representation. In order to describe the Chern--Simons--Dirac system with a general gauge group $\mathfrak{G}$, we first need to describe the \emph{spinor multiplet fields} on $\mathbb R^{1+2}$. Let $\gamma^{\mu}$ ($\mu = 0,1,2$) be the \emph{gamma matrices}, which are $\mathbb C$-valued $2 \times 2$ matrices satisfying \begin{equation} \label{eq:gmm-mat} \gamma^{\mu} \gamma^{\nu} + \gamma^{\nu} \gamma^{\mu} = - 2 (\eta^{-1})^{\mu \nu} \, {\bf I}_{2 \times 2} \, . 
\end{equation} The standard representations of $\gamma^{\mu}$ are given by \begin{align*} \gamma^{0} = \left( \begin{array}{cc} 1 & 0 \\ 0 & - 1 \end{array} \right), \quad \gamma^{1} = \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right), \quad \gamma^{2} = \left( \begin{array}{cc} 0 & -i \\ -i & 0 \end{array} \right). \end{align*} The space of \emph{spinors} associated to the Minkowski space $(\mathbb R^{1+2}, \eta)$ is simply $\Delta = \mathbb C^{2}$, on which the gamma matrices act by matrix multiplication, and the spinor bundle is the trivial bundle $S = \mathbb R^{1+2} \times \Delta$. Let $\tilde{E}$ be a vector bundle with fiber $W$ and structure group $\mathfrak{G}$. The bundle of \emph{spinor multiplets} is the tensor product $E = S \otimes_{\mathbb C} \tilde{E}$, whose fiber is $V = \Delta \otimes_{\mathbb C} W$. Using the triviality of the bundle $E$, we will identify the sections (or \emph{spinor multiplet fields}) of $E$ with $V$-valued functions on $\mathbb R^{1+2}$. The gamma matrices $\gamma^{\mu}$ and the elements $g \in \mathfrak{G}$, $a \in \mathfrak{g}$ act on $V$ by the rules \begin{equation*} \gamma^{\mu} (s \otimes w) = \gamma^{\mu} s \otimes w, \quad g \cdot (s \otimes w) = s \otimes \rho(g) w, \quad a \cdot (s \otimes w) = s \otimes (\mathrm{d} \rho \restriction_{I} \! (a) w), \end{equation*} where $s \in \Delta$ and $w \in W$. Moreover, the inner products on $\Delta$ and $W$ induce an inner product $\brk{\cdot, \cdot}_{V}$ on $V$, characterized by \begin{equation*} \brk{s_{1} \otimes w_{1}, s_{2} \otimes w_{2}}_{V} = (s_{2}^{\dagger} s_{1}) \brk{w_{1}, w_{2}}_{W}, \end{equation*} where $s_{1}, s_{2} \in \Delta$ and $w_{1}, w_{2} \in W$. Note that $\gamma^{0}$ is hermitian, $\gamma^{j}$ ($j=1,2$) is anti-hermitian, $g \in \mathfrak{G}$ is unitary and $a \in \mathfrak{g}$ is anti-hermitian with respect to $\brk{\cdot, \cdot}_{V}$. 
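As a quick sanity check (our own illustration, not part of the paper), the standard representation above can be verified numerically against the Clifford relation \eqref{eq:gmm-mat}, assuming the signature convention $\eta = \mathrm{diag}(-1, +1, +1)$, which is the one consistent with the matrices given:

```python
import numpy as np

# Sanity check (illustration only): the standard representation of the
# gamma matrices satisfies the Clifford relation
#   gamma^mu gamma^nu + gamma^nu gamma^mu = -2 (eta^{-1})^{mu nu} I,
# assuming the Minkowski metric eta = diag(-1, +1, +1) on R^{1+2}.
g0 = np.array([[1, 0], [0, -1]], dtype=complex)
g1 = np.array([[0, 1], [-1, 0]], dtype=complex)
g2 = np.array([[0, -1j], [-1j, 0]], dtype=complex)
gammas = [g0, g1, g2]
eta_inv = np.diag([-1.0, 1.0, 1.0])  # inverse metric (eta^{-1})^{mu nu}

I2 = np.eye(2)
for mu in range(3):
    for nu in range(3):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, -2 * eta_inv[mu, nu] * I2)

# gamma^0 is hermitian; gamma^1, gamma^2 are anti-hermitian
assert np.allclose(g0, g0.conj().T)
assert np.allclose(g1, -g1.conj().T)
assert np.allclose(g2, -g2.conj().T)
print("Clifford relation verified")
```

In particular, $(\gamma^{0})^{2} = I$ while $(\gamma^{1})^{2} = (\gamma^{2})^{2} = -I$, matching the hermiticity properties stated above.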
Given a $\mathfrak{g}$-valued connection 1-form $A$, a spinor multiplet field $\psi$ and a vector $X$ on $\mathbb R^{1+2}$, we define the gauge covariant derivative ${}^{(A)}\bfD_{X}$ in the direction $X$ associated to $A$ by \begin{equation} \label{eq:covd-def-CSD} {}^{(A)}\bfD_{X} \psi = \nabla_{X} \psi +A(X) \cdot \psi. \end{equation} The curvature 2-form $F$ is defined by \eqref{eq:curv-def} as in the case of Chern--Simons--Higgs. In addition, we introduce the \emph{covariant Dirac operator}, defined by \begin{equation*} {}^{(A)} \! \not \!\! \bfD := \gamma^{\mu} \, {}^{(A)}\bfD_{\partial_{\mu}} . \end{equation*} The Chern--Simons--Dirac Lagrangian density is given by \begin{equation*} L[A, \psi] = \frac{\kappa}{2} L_{CS} + i \brk{{}^{(A)} \! \not \!\! \bfD \psi, \gamma^{0} \psi} + m \brk{\psi, \gamma^{0} \psi}. \end{equation*} where $\kappa \neq 0$ is the coupling constant and $m > 0$ is the mass of the spinor multiplet field $\psi$ and $\brk{\cdot, \cdot} = \brk{\cdot, \cdot}_{V}$. The \emph{Chern--Simons--Dirac equation} for $(A, \psi)$ is the Euler--Lagrange equation for the action $\mathcal S[A, \psi] = \int_{\mathbb R^{1+2}} L[A, \psi] \, \mathrm{d} t \mathrm{d} x$, and takes the form \begin{equation}\label{eq:CSD} \tag{CSD} \left\{ \begin{aligned} i {}^{(A)} \! \not \!\! \bfD \psi + m \psi =& 0 \\ F =& \frac{1}{\kappa} (\star J_{\mathrm{CSD}}) \\ J_{\mathrm{CSD}}(\partial_{\mu}) =& - i \eta_{\mu \nu} \brk{\gamma^{0} \gamma^{\nu} \mathcal T \psi, \psi}. \end{aligned} \right. \end{equation} Here $\mathcal T v \in \mathfrak{g} \otimes V$ for $v \in V$ is again defined as $\mathcal T v = \sum_{A} e_{A} \otimes \mathcal T^{A} v$ with $\mathcal T^{A} v = \sum_{A'} \delta^{A A'} e_{A'} \cdot v$, where $\set{e_{A}}$ is any orthonormal basis for $\mathfrak{g}$ with respect to $\brk{\cdot, \cdot}_{\mathfrak{g}}$. 
The matrix $\gamma^{0} \gamma^{\nu}$ acts on $\mathcal T v$ in the natural fashion, i.e., $\gamma^{0} \gamma^{\nu} \mathcal T v = \sum_{A} e_{A} \otimes \gamma^{0} \gamma^{\nu} \mathcal T^{A} v$. An important special case of \eqref{eq:CSD} is when the gauge group is abelian, i.e., $\mathfrak{G} = \mathrm{U}(1)$. \begin{example}[Abelian Chern--Simons--Dirac \cite{MR2290338}] \label{ex:a-CSD} Let $\mathfrak{G} = \mathrm{U}(1)$, $\mathfrak{g} = \mathrm{u}(1) = i \mathbb R$ and $\brk{i a, i b}_{\mathfrak{g}} = ab$ for $a, b \in \mathbb R$. Taking $W = \mathbb C$ with the usual action of $\mathrm{U}(1)$, we have the natural equivalence $V = \Delta \otimes_{\mathbb C} \mathbb C \cong \Delta = \mathbb C^{2}$ and $e^{i \theta} \in \mathrm{U}(1)$ acts on this space by component-wise multiplication. Then the 1-form $J_{\mathrm{CSD}}$ takes the form \begin{equation*} J_{\mathrm{CSD}}(\partial_{\mu}) = \eta_{\mu \nu} (\psi^{\dagger} \gamma^{0} \gamma^{\nu} \psi). \end{equation*} \end{example} \subsection{Main theorems} \label{subsec:main-results} We now state precisely the main theorems of this paper, which are small data global existence and decay results for the general Chern--Simons--Higgs and Dirac equations formulated above. We begin with the case of \eqref{eq:CSH}. We say that a triplet $(a, f, g)$ of a $\mathfrak{g}$-valued 1-form $a = a_{1} \mathrm{d} x^{1} + a_{2} \mathrm{d} x^{2}$ and $V$-valued functions $f, g$ on $\Sigma_{0} = \set{0} \times \mathbb R^{2}$ is an \emph{initial data set for \eqref{eq:CSH}} if it obeys the \emph{\eqref{eq:CSH} constraint equation}, i.e., \begin{equation} \partial_{1} a_{2} - \partial_{2} a_{1} + \LieBr{a_{1}}{a_{2}} = - \frac{1}{\kappa} \Big( \brk{\mathcal T f, g} + \brk{g, \mathcal T f} \Big). 
\end{equation} We say that $(A, \varphi)$ is a solution to the initial value problem (IVP) for \eqref{eq:CSH} with data $(a, f, g)$ if $(A, \varphi)$ solves \eqref{eq:CSH} and obeys \begin{equation*} (A, \varphi, {}^{(A)}\bfD_{0} \varphi) \restriction_{\Sigma_{0}} = (a, f, g), \end{equation*} where the notation $\restriction_{\Sigma_{0}}$ refers to the pullback along the embedding $\Sigma_{0} \hookrightarrow \mathbb R^{1+2}$; see the end of Section~\ref{subsec:extr-calc} for the precise definition. Note that the constraint equation is precisely the pullback of the equation $F = \frac{1}{\kappa} \star J_{\mathrm{CSH}}$ along the embedding $\Sigma_{0} \hookrightarrow \mathbb R^{1+2}$; hence it necessarily holds for $(a, f, g)$ if a solution to the IVP exists. The precise statement of the main theorem for \eqref{eq:CSH} is as follows. \begin{theorem} \label{thm:CSH} Consider the IVP for \eqref{eq:CSH} with $v \neq 0$ and $\kappa \neq 0$. There exists a positive function $\delta_{1}(R)$ of $R \in (0, \infty)$ such that the following holds. Let $(a, f, g)$ be a smooth initial data set for \eqref{eq:CSH} obeying \begin{equation} \label{eq:CSH-id} {\mathrm{supp}} \, (f,g) \subseteq B_{R}, \quad \sum_{k=1}^{5} \nrm{({}^{(A, \Sgm_{0})}\bfD^{(k)} f, {}^{(A, \Sgm_{0})}\bfD^{(k-1)} \, g)}_{L^{2}(\mathbb R^{2})} + \nrm{f}_{L^{2}(\mathbb R^{2})} \leq \epsilon, \end{equation} where $B_{R} := \set{x \in \mathbb R^{2} : \abs{x} < R}$ and ${}^{(A, \Sgm_{0})}\bfD$ is the (induced) gauge covariant derivative on $\set{0} \times \mathbb R^{2}$. If $\epsilon \leq \delta_{1}(R)$, then a smooth solution to the IVP exists globally, and it is unique up to smooth local gauge transformations. Moreover, the solution $(A, \varphi)$ exhibits the following gauge invariant asymptotic behavior: \begin{equation} \abs{\varphi(t,x)} + \abs{{}^{(A)}\bfD \varphi(t,x)} < C \epsilon (1+\abs{t})^{-1}.
\end{equation} \end{theorem} By \emph{uniqueness up to smooth local gauge transformations}, we mean the following: Given two solutions $(A, \varphi)$, $(A', \varphi')$ to the IVP for \eqref{eq:CSH}, there exists an open covering $\set{O_{\alpha}}_{\alpha \in \mathcal A}$ of $\mathbb R^{1+2}$ and smooth functions (local gauge transformations) $\set{U_{\alpha} : O_{\alpha} \to \mathfrak{G}}_{\alpha \in \mathcal A}$ such that the gauge transform of $(A, \varphi) \restriction_{O_{\alpha}}$ by $U_{\alpha}$ equals $(A', \varphi')$, i.e., \begin{equation*} (A', \varphi')(t,x) = (U_{\alpha} A U_{\alpha}^{-1} - \mathrm{d} U_{\alpha} U_{\alpha}^{-1}, U_{\alpha} \cdot \varphi)(t,x) \quad \hbox{ for every } (t,x) \in O_{\alpha}. \end{equation*} \begin{remark} \label{rem:CSH-general} As one may expect from the perturbative nature of the statement, the exact self-duality of \eqref{eq:CSH} is unnecessary for Theorem~\ref{thm:CSH} to hold. It will be clear from our proof that the important points are that $\mathcal U(\varphi)$ has a positive mass term $m^{2} \abs{\varphi}^{2}$ $(m \neq 0)$ and that the remaining terms of $\mathcal U(\varphi)$ are quartic or higher in $\varphi$, so that $U(\varphi)$ is cubic or higher. \end{remark} Next, we consider the case of \eqref{eq:CSD}. We say that a pair $(a, \psi_{0})$ of a $\mathfrak{g}$-valued 1-form $a = a_{1} \mathrm{d} x^{1} + a_{2} \mathrm{d} x^{2}$ and a $V = \Delta \otimes W$-valued function $\psi_{0}$ on $\Sigma_{0}$ is an \emph{initial data set for \eqref{eq:CSD}} if it obeys the \emph{\eqref{eq:CSD} constraint equation}, i.e., \begin{equation} \label{eq:CSD-constraint} \partial_{1} a_{2} - \partial_{2} a_{1} + \LieBr{a_{1}}{a_{2}} = - \frac{i}{\kappa} \brk{\mathcal T \psi_{0}, \psi_{0}}. \end{equation} We say that $(A, \psi)$ is a solution to the IVP for \eqref{eq:CSD} with data $(a, \psi_{0})$ if $(A, \psi)$ solves \eqref{eq:CSD} and obeys \begin{equation*} (A, \psi) \restriction_{\Sigma_{0}} = (a, \psi_{0}).
\end{equation*} Again, since the constraint equation \eqref{eq:CSD-constraint} is a part of \eqref{eq:CSD}, it necessarily holds for $(a, \psi_{0})$ if a solution to the IVP exists. We now state our main theorem for \eqref{eq:CSD}. \begin{theorem} \label{thm:CSD} Consider the IVP for \eqref{eq:CSD} with $m \neq 0$ and $\kappa \neq 0$. There exists a positive function $\delta_{2}(R)$ of $R \in (0, \infty)$ such that the following holds: Let $(a, \psi_{0})$ be a smooth initial data set obeying \begin{equation} \label{eq:CSD-id} {\mathrm{supp}} \, \psi_{0} \subseteq B_{R}, \quad \sum_{k=0}^{5} \nrm{{}^{(A, \Sgm_{0})}\bfD^{(k)} \psi_{0}}_{L^{2}(\mathbb R^{2})} < \epsilon. \end{equation} If $\epsilon \leq \delta_{2}(R)$, then a smooth solution to the IVP exists globally on $\mathbb R^{1+2}$, and it is unique up to smooth local gauge transformations. Moreover, the solution $(A, \psi)$ exhibits the following gauge invariant asymptotic behavior: \begin{equation} \abs{\psi(t,x)} + \abs{{}^{(A)}\bfD \psi(t,x)} < C \epsilon (1+\abs{t})^{-1}. \end{equation} \end{theorem} The notion of uniqueness up to smooth local gauge transformations is defined as in the case of \eqref{eq:CSH}. We conclude this section with a few remarks. \begin{remark} For \eqref{eq:CSH}, global existence and regularity for initial data of arbitrary size have already been established in the abelian case (Example~\ref{ex:a-CSH}); see \cite{Chae:2002eu, Selberg:2012vb, Oh:2013bq}. This result is essentially proved by iterating a local well-posedness theorem with the help of the conserved energy of the system, with respect to which \eqref{eq:CSH} is subcritical. Even when global regularity is known, however, Theorem~\ref{thm:CSH} provides complementary information about the asymptotic decay of the solution, at least in the regime of small compactly supported initial data.
On the other hand, for \eqref{eq:CSD} a similar global regularity statement is not available even in the abelian case; to our knowledge, Theorem~\ref{thm:CSD} is the first global existence result for \eqref{eq:CSD}. \end{remark} \begin{remark} The dependence of $\delta_{1}$ and $\delta_{2}$ on the size $R$ of the support of the matter field is a technical condition, which is common in the literature on nonlinear Klein--Gordon equations. It arises from the use of a foliation by hyperboloids (see Subsection \ref{subsec:polar-coords}), which only covers the domain of dependence of a ball in $\set{t=0}$ (or equivalently, an outgoing null cone). One idea for removing this condition is to prove a separate global existence and decay theorem in the domain of dependence of $\set{t=0} \setminus B_{R}$. In this region, one may exploit the improved rate of decay for solutions to the free Klein--Gordon equation, namely $t^{-N}$ for any $N$ as opposed to $t^{-1}$ in the case considered in the present paper. \end{remark} \subsection{Main ideas} \label{subsec:main-ideas} In this subsection, we discuss the key difficulties of the problem and thereby motivate the main ideas of the paper. To keep the discussion simple and concrete, we mostly focus on the special case of the abelian self-dual Chern--Simons--Higgs equation (Example~\ref{ex:a-CSH}), where we furthermore fix $v = \kappa = 1$. Unless otherwise specified, we let $(A, \phi)$ denote a solution to this system on $\mathbb R^{1+2}$, which is assumed to be smooth and suitably decaying in space. \subsubsection*{The problem of magnetic charge} The main difficulty in studying the precise asymptotic behavior of a solution is the possible long range effect of the total magnetic charge of the system, which is defined by \begin{equation*} q = \int_{\set{t} \times \mathbb R^{2}} F.
\end{equation*} By integrating the equation $\mathrm{d} F = \mathrm{d}^{2} A = 0$ over sets of the form $(t_{1}, t_{2}) \times \mathbb R^{2}$ and applying Stokes' theorem, it follows that $q$ is conserved in time. On the other hand, integrating $\mathrm{d} A = F$ over a ball of the form $\set{t} \times B_{R}$, where $t \in \mathbb R$ and $R > 0$, we see that \begin{equation*} \int_{\set{t} \times \partial B_{R}} A = \int_{\set{t} \times B_{R}} F \to q \quad \hbox{ as } R \to \infty. \end{equation*} For generic initial data, the total magnetic charge $q$ is non-zero. In this case, the preceding computation shows that a part of $A$ has a long range tail $q r^{-1}$ as $r \to \infty$. This behavior is potentially problematic, since upon expansion the covariant Klein--Gordon equation for $\phi$ has a quadratic term of the form $2 i A^{\mu} \partial_{\mu} \phi$ and $r^{-1}$ is not integrable. \subsubsection*{Gauge covariant vector field method for Chern--Simons theories} To overcome the above difficulty, we observe that the $r^{-1}$ tail manifests itself in gauge dependent fields, such as $A$, but not in gauge covariant fields, such as $F$. In fact, note that $F$ is compactly supported if $\phi$ is. These considerations suggest that it might be favorable to analyze the long time behavior of solutions to Chern--Simons theories in a \emph{gauge covariant} fashion. To this end, we develop and employ a gauge covariant version of the celebrated vector field method, which originated in \cite{MR784477,Klainerman:tc, MR1316662} in the context of the wave equation, and was first used in the context of Klein--Gordon equations in \cite{MR803252}.
The key idea of the gauge covariant vector field method is to replace the commuting vector fields $Z_{\mu \nu}$ (see Section~\ref{subsec:KillingVF}) by their gauge covariant analogues \begin{equation*} Z_{\mu \nu} = x_{\mu} \partial_{\nu } - x_{\nu} \partial_{\mu} \quad \mapsto \quad \bfZ_{\mu \nu} = {}^{(A)}\bfD_{Z_{\mu \nu}} = x_{\mu} {}^{(A)}\bfD_{\nu} - x_{\nu} {}^{(A)}\bfD_{\mu}, \end{equation*} expressed in the rectilinear coordinates $(x^{0}, x^{1}, x^{2})$. On one hand, we develop a geometric formalism based on exterior differential calculus and Hodge duality, which seems natural for Chern--Simons theories, to compute iterated commutators of $\bfZ_{\mu \nu}$ with the Chern--Simons system (see Section~\ref{sec:comm}). On the other hand, we establish a gauge invariant Klainerman--Sobolev inequality (Proposition~\ref{prop:KlSob}), which converts boundedness of the generalized energies constructed by commutation with $\bfZ_{\mu \nu}$ into pointwise decay. A gauge invariant version of the Klainerman--Sobolev inequality for the Klein--Gordon equation was first proved by Psarelli in the work \cite{MR1672001, MR2131047} on the massive Maxwell--Klein--Gordon and Maxwell--Dirac equations in $\mathbb R^{1+3}$. We remark that a gauge covariant vector field method was employed in the study of the massless Maxwell--Klein--Gordon equation in $\mathbb R^{1+3}$ as well; see \cite{Lindblad:2006vh, Bieri:2014lq}. Furthermore, a suitable version of this method proved to be useful in the small data global existence problem for the closely related Chern--Simons--Schr\"odinger equation in $\mathbb R^{1+2}$ \cite{Oh:2013tx}. \subsubsection*{The problem of anomalous commutation} The success of the vector field method relies on a good commutation property of the system with the commuting vector fields, which in this case are $\bfZ_{\mu \nu}$. However, it turns out that the Chern--Simons theories exhibit an \emph{anomalous} commutation property with $\bfZ_{\mu \nu}$, which is a priori problematic.
To demonstrate this issue in more detail, we begin by computing (up to the main term) the commutator between $\bfZ_{\mu \nu}$ and the covariant Klein--Gordon operator ${}^{(A)} \Box - 1$ using the Chern--Simons equation $F = \star J$: \begin{equation} \label{eq:comm-problem} \begin{aligned} \LieBr{\bfZ_{\mu \nu}}{{}^{(A)} \Box - 1} \varphi = & \iota_{Z_{\mu \nu}} \star ( (\mathrm{d} J) \wedge \varphi) - 2 \iota_{Z_{\mu \nu}} \star ( J \wedge {}^{(A)}\ud \varphi) + (\hbox{l.o.t.}) \\ = & N_{cubic} + (\hbox{l.o.t.}) \end{aligned} \end{equation} where $(\hbox{l.o.t.})$ denotes terms which are quintic or higher in $\varphi$ and \begin{equation*} N_{cubic} = \iota_{Z_{\mu \nu}} \star (\varphi \wedge \overline{{}^{(A)}\ud \varphi} \wedge {}^{(A)}\ud \varphi). \end{equation*} A simple computation (see Lemma~\ref{lem:ptwise-N}) shows that, in general, the best one can say is \begin{equation*} \abs{N_{cubic}} = \abs{\iota_{Z_{\mu \nu}} \star (\varphi \wedge \overline{{}^{(A)}\ud \varphi} \wedge {}^{(A)}\ud \varphi)} \leq C \abs{\varphi} \abs{\bfT \varphi} \abs{\bfS \varphi}, \end{equation*} where ${\bf T}$ denotes one of $\set{{}^{(A)}\bfD_{0}, {}^{(A)}\bfD_{1}, {}^{(A)}\bfD_{2}}$ and ${\bf S}$ is the gauge covariant analogue of the scaling vector field, i.e., ${\bf S} = x^{\mu} \, {}^{(A)}\bfD_{\mu}$. The appearance of ${\bf S} \varphi$ is undesirable, since $\bfS$ does not commute well with ${}^{(A)} \Box - 1$. Indeed, comparing\footnote{Let $f$ be a solution to the free Klein--Gordon equation $(\Box - 1) f = 0$ with compactly supported data. In general, the sharp decay rate for $\nabla f$ and $f$ is $\tau^{-1}$, where $\tau = \sqrt{t^{2} - r^{2}}$. Since each $Z_{\mu \nu}$ commutes with $\Box - 1$, note that $\abs{Z_{\mu \nu} f} \lesssim \tau^{-1}$ as well.
If, in addition, $\abs{S f} \lesssim \tau^{-\alpha}$ for any $\alpha > 0$, then it can be proved that $\abs{\nabla f} \lesssim \tau^{-\min \set{1+\alpha, 2}}$, which is impossible in general.} with the free case, we see that $\abs{\bfS \varphi}$ should not exhibit any decay in time. In general, one may hope for uniform boundedness of $\abs{\bfS \varphi}$ at best. This fact renders the nonlinearity $N_{cubic}$ essentially quadratic, which is borderline for closing the proof of global existence with only the (gauge invariant) Klainerman--Sobolev inequality. In fact, even when we assume the sharp decay rate \begin{equation} \label{eq:sharp-decay} \abs{\varphi} + \abs{\bfN \varphi} \lesssim \epsilon t^{-1}, \end{equation} where ${\bf N} = \frac{1}{\sqrt{t^{2} - r^{2}}} {\bf S}$ is the normalization of ${\bf S}$, the gauge covariant vector field method discussed so far seems to only lead to a weak decay rate \begin{equation} \label{eq:weak-decay} \abs{\bfZ^{(m)} \varphi} + \abs{\bfN \bfZ^{(m-1)} \varphi} \lesssim \epsilon t^{-1} \log^{m+2} (1+t) \end{equation} due to the above anomalous commutation property. In particular, this decay is insufficient to recover the sharp decay rate \eqref{eq:sharp-decay}. \subsubsection*{Gauge covariant ODE method for decay} To solve the problem of anomalous commutation, we begin by observing that the equation for $\varphi$ itself without any commutation with $\bfZ_{\mu \nu}$, \begin{equation} \label{eq:KG4phi} ({}^{(A)} \Box - 1) \varphi = U_{\mathrm{CSH}}(\varphi), \end{equation} is favorable in the sense that $U_{\mathrm{CSH}}(\varphi)$ is at least cubic or higher in $\varphi$, and no nonlinearity containing ${\bf S} \varphi$ is present. If one is able to work directly with this equation, then one may hope to prove that at least the undifferentiated field $\varphi$ obeys the sharp decay rate $t^{-1}$. Fortunately, this is indeed the case. 
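The mechanism behind this claim can be illustrated by a toy model (our own illustration, not taken from the paper): if $u$ solves the forced oscillator $u'' + u = f$ with $\abs{f(\tau)} \leq C \tau^{-1-\delta}$ for some $\delta > 0$, then multiplying by $\overline{u'}$ and integrating shows that the energy $\abs{u'}^{2} + \abs{u}^{2}$ stays uniformly bounded, since $\abs{f}$ is integrable. The following sketch checks this numerically with hypothetical parameter values:

```python
import numpy as np

# Toy model for the ODE mechanism: solutions of
#   u''(tau) + u(tau) = f(tau),  |f(tau)| <= C tau^{-1-delta},
# have bounded "energy" E = |u'|^2 + |u|^2, because
#   dE/dtau = 2 u' f <= 2 sqrt(E) |f|  and  |f| is integrable in tau.
# (In the paper the role of u is played by t*varphi and the derivative
# is gauge covariant, but the boundedness mechanism is the same.)

def integrate(tau0, tau1, n, delta=0.5, C=1.0):
    taus = np.linspace(tau0, tau1, n)
    h = taus[1] - taus[0]
    u, v = 1.0, 0.0  # initial position and velocity at tau0
    energies = []
    for tau in taus:
        f = C * tau ** (-1.0 - delta)
        # symplectic Euler step for u'' + u = f
        v += h * (f - u)
        u += h * v
        energies.append(u * u + v * v)
    return np.array(energies)

E = integrate(1.0, 200.0, 200000)
print("max energy:", E.max())  # stays uniformly bounded
```

The Gr\"onwall-type bound gives $\sqrt{E(\tau)} \leq \sqrt{E(\tau_{0})} + \int_{\tau_{0}}^{\infty} \abs{f}$, so with the parameters above the energy never exceeds $9$ (up to discretization error), no matter how large $\tau_{1}$ is.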
We first rewrite ${}^{(A)} \Box - 1$ as \begin{equation} \label{eq:KG4phi-ODE} \begin{aligned} ({}^{(A)} \Box - 1) \varphi = &- \frac{1}{\tau^{2}} {}^{(A)}\bfD_{\tau}( \tau^{2} {}^{(A)}\bfD_{\tau} \varphi) - \varphi + \triangle_{A, \mathcal H_{\tau}} \varphi \\ = & \frac{1}{t} \Big[ - {}^{(A)}\bfD_{\tau}^{2} (t \varphi) - t \varphi + O\Big( \frac{\epsilon^{3}}{t^{1+}} \Big) \Big] \end{aligned} \end{equation} where $\tau = \sqrt{t^{2} - r^{2}}$ and $\triangle_{A, \mathcal H_{\tau}}$ is the covariant Laplacian on the constant-$\tau$ hypersurfaces; see Section~\ref{subsec:ODE} for more details. The last equality can be justified using only the weak decay bounds \eqref{eq:weak-decay}. By \eqref{eq:weak-decay}, \eqref{eq:KG4phi} and \eqref{eq:KG4phi-ODE}, it follows that \begin{equation*} {}^{(A)}\bfD_{\tau}^{2} (t \varphi) + t \varphi = O\Big( \frac{\epsilon^{3}}{t^{1+}} \Big) \end{equation*} which may be viewed as a \emph{gauge covariant ODE} for $t \varphi$. Multiplying by $\overline{{}^{(A)}\bfD_{\tau} (t \varphi)}$ and integrating in $\tau$, we recover the sharp decay rate \eqref{eq:sharp-decay} in terms of the initial data, which allows us to close the whole proof. We note that such an ODE technique has been used effectively in the non-gauge covariant setting to handle nonlinear Klein--Gordon equations exhibiting modified scattering; see, for instance, \cite{MR2188297}. We also mention the work \cite{MR2056833}, where a similar ODE technique was used. \subsubsection*{Squaring the covariant Dirac equation} Finally, we remark that it is possible to treat the massive Chern--Simons--Dirac equation on the same footing as Chern--Simons--Higgs, using the well-known fact that squaring the (covariant) Dirac operator leads to a (covariant) Klein--Gordon operator. The lower order terms turn out to be cubic in $\psi$, which is acceptable; see Section~\ref{subsec:uni} for more details.
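To make the squaring step explicit, the following standard computation may be kept in mind (a sketch under the convention \eqref{eq:gmm-mat}; the precise form used in the body of the paper is the one in Section~\ref{subsec:uni}):

```latex
% Sketch of the squaring computation (standard identity; exact signs and
% constants depend on the convention \eqref{eq:gmm-mat}).
% Splitting gamma^mu gamma^nu into symmetric and antisymmetric parts,
\begin{align*}
  ({}^{(A)} \! \not \!\! \bfD)^{2} \psi
  &= \tfrac{1}{2} \{\gamma^{\mu}, \gamma^{\nu}\} \, {}^{(A)}\bfD_{\mu} {}^{(A)}\bfD_{\nu} \psi
   + \tfrac{1}{2} \gamma^{\mu} \gamma^{\nu} \LieBr{{}^{(A)}\bfD_{\mu}}{{}^{(A)}\bfD_{\nu}} \psi \\
  &= - {}^{(A)} \Box \psi + \tfrac{1}{2} \gamma^{\mu} \gamma^{\nu} F_{\mu \nu} \cdot \psi,
\end{align*}
so that applying $i \, {}^{(A)} \! \not \!\! \bfD$ to a solution of
$i \, {}^{(A)} \! \not \!\! \bfD \psi + m \psi = 0$ yields
\begin{equation*}
  ({}^{(A)} \Box - m^{2}) \psi = \tfrac{1}{2} \gamma^{\mu} \gamma^{\nu} F_{\mu \nu} \cdot \psi.
\end{equation*}
```

Since $F = \frac{1}{\kappa} \star J_{\mathrm{CSD}}$ is quadratic in $\psi$, the right-hand side is indeed cubic in $\psi$, as claimed above.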
We remark that the same observation was used by Psarelli \cite{MR2131047} to treat the small data global existence problem for the massive Maxwell--Dirac equation in $\mathbb R^{1+3}$ essentially in the same fashion as the massive Maxwell--Klein--Gordon equation in the same spacetime. \subsection{History of the problem and related results} \label{subsec:history} The relativistic Chern--Simons model in ${ \mathbb{R} }^{1+2}$ was first suggested by Hong--Kim--Pac \cite{hong:1990} and Jackiw--Weinberg \cite{jackiw:1990} to study vortex solutions of the abelian Higgs model carrying both electric and magnetic charges. When the potential in the Lagrangian is self-dual (Example \ref{ex:a-CSH}), the minimum of the energy is saturated if and only if $(A, \varphi)$ satisfies a simpler system of first order equations called the self-dual equations, or the Bogomol'nyi equations. The self-dual equations can be further reduced to a single elliptic equation by the Jaffe--Taubes reduction \cite{jaffe:1980}. According to the boundary condition $|\varphi| \to 1$ or $|\varphi|\to 0$ at infinity, the solutions are called topological or non-topological, respectively. Topological solutions were constructed early on by Wang \cite{wang:1991}, while general multi-vortex non-topological solutions were constructed later by Chae--Imanuvilov \cite{chae:2000}. The relativistic non-abelian Chern--Simons model was proposed by Kao and Lee \cite{kao:1994}, and Dunne \cite{du1, du2}. The supersymmetric Chern--Simons model was discussed in \cite{gu1, gu2, lo}. Topological solutions were constructed by Yang \cite{yang:1997}. The existence of non-topological solutions was obtained only very recently, and the general theory is still limited. For the recent developments we refer to \cite{ao:2014, lin:2013, huang:2014, choe:2015}. Most of the known results consider $\mathfrak{G} = \mathrm{SU}(3)$ as the gauge group. In recent years, the initial value problem for relativistic Chern--Simons theories has been studied by many authors.
Most of the work in the literature (to the best of our knowledge) concerns well-posedness of such equations under a certain gauge condition. The most investigated case so far is the abelian Chern--Simons--Higgs equation (Example~\ref{ex:a-CSH}). This equation is \emph{energy subcritical}: After neglecting the lower order linear and cubic terms in the potential, the scaling critical Sobolev space is $(\phi, \partial_{t} \phi) \in \dot{H}_{x}^{1/2} \times \dot{H}_{x}^{-1/2}$, whereas the energy (essentially) controls the $\dot{H}_{x}^{1} \times L_{x}^{2}$ norm. Global well-posedness of the IVP with sufficiently smooth initial data was proved by Chae--Choe \cite{Chae:2002eu} in the Coulomb gauge $\partial_{1} A_{1} + \partial_{2} A_{2} = 0$, by combining higher order energy estimates with the Br\'ezis--Gallouet inequality \cite{MR582536}. Afterwards, building on the work of Huh \cite{MR2274820, MR2812958} and Bournaveas \cite{MR2539222} on low regularity local well-posedness, global well-posedness for arbitrary finite energy data was established by Selberg--Tesfahun \cite{Selberg:2012vb} in the Lorenz gauge $- \partial_{0} A_{0} + \partial_{1} A_{1} + \partial_{2} A_{2} = 0$. The regularity condition for local well-posedness has been subsequently improved in various gauges (Lorenz, Coulomb and temporal $A_{0} = 0$) by various authors \cite{Oh:2012uq, Oh:2013bq, Pecher:2014pd, Pecher:2015yg}. The local well-posedness theory for the abelian Chern--Simons--Dirac equation (Example \ref{ex:a-CSD}) parallels that of the abelian Chern--Simons--Higgs equation; see \cite{MR2290338, Oh:2012uq, MR3163407, Bournaveas:2013vk, Pecher:2014ul}. We note, however, that the scaling critical Sobolev space for this equation is $\psi \in L_{x}^{2}$, which coincides with the only known coercive conserved quantity of the equation (charge).
Consequently, large data global well-posedness is far more difficult to establish in the Dirac case than in the Higgs case, and remains a major open problem. The IVP for relativistic Chern--Simons equations with general non-abelian gauge groups has not been addressed much in the literature. In the small data case, the local well-posedness theory in the abelian case extends without much difficulty. However, new issues arise when considering data of arbitrary size. For instance, the classical result of Uhlenbeck (see Proposition~\ref{prop:id-temporal}) on the existence of a regular gauge transformation into the Coulomb gauge requires a certain smallness condition, which makes the existing proofs of global well-posedness of the abelian Chern--Simons--Higgs equation fail in the non-abelian case. Nevertheless, in the forthcoming work of the second author, global well-posedness for any finite energy data is proved for the Chern--Simons--Higgs equation with general non-abelian gauge groups, using the Yang--Mills heat flow gauge introduced in \cite{MR3190112, MR3357182}. Finally, we mention the recent development concerning a non-relativistic version of Chern--Simons theory, namely the (abelian) \emph{Chern--Simons--Schr\"odinger} equation. This equation is critical with respect to the conserved mass, i.e., the $L^{2}$-norm of the Schr\"odinger field. After the initial work of Berg\'e--de~Bouard--Saut \cite{MR1328596}, local well-posedness for data small in $H^{s}$ for any $s > 0$ was established in the interesting work of Liu--Smith--Tataru \cite{MR3286341} using the heat gauge $A_{0} = \partial_{1} A_{1} + \partial_{2} A_{2}$. We are aware of two works on the global-in-time behavior of solutions to the Chern--Simons--Schr\"odinger equation. One is the recent work of Liu--Smith \cite{Liu:2013xr}, where large data global well-posedness and scattering for subthreshold mass were established under equivariance symmetry.
Another is the work \cite{Oh:2013tx} of the second author with Pusateri, where an analogue of the main theorems of this paper (i.e., global existence and optimal pointwise decay rate of the solution with small localized data) was established for Chern--Simons--Schr\"odinger without any symmetry assumptions. In fact, by revealing a new genuinely cubic null structure of the Chern--Simons--Schr\"odinger equation in the Coulomb gauge, it was furthermore proved in \cite{Oh:2013tx} that the solutions scatter to free waves in this gauge. At the moment, scattering to free waves in any gauge is open for \eqref{eq:CSH} and \eqref{eq:CSD}. \subsection{Structure of the paper} \label{subsec:outline} This paper is structured as follows. \begin{itemize} \item In Section~\ref{sec:setup}, the basic geometric setup (e.g., tensor notation, vector bundles, exterior differential calculus, Killing vector fields, etc.) is given. \item Next, in Section~\ref{sec:reduction}, preliminary reductions of the main theorems are performed. For instance, a unified system of equations \eqref{eq:CS-uni} is introduced, which allows us to treat \eqref{eq:CSH} and \eqref{eq:CSD} concurrently. By the end of this section, the proof of the main theorems is reduced to showing the main a priori estimates, Proposition~\ref{prop:main}. \item In Section~\ref{sec:covVF}, the main analytic tools of the paper are presented, including a gauge covariant vector field method (energy inequality and Klainerman--Sobolev inequality). Also introduced are a gauge covariant ODE argument for establishing the sharp decay rate, and gauge invariant Gagliardo--Nirenberg inequalities. \item Section~\ref{sec:comm} is the algebraic heart of the paper; we use the formalism of exterior differential calculus for vector-valued forms to derive the commutation properties of the Chern--Simons systems with respect to the Killing vector fields $Z_{\mu \nu}$.
\item Finally, in Section~\ref{sec:BA}, we use the tools developed in Sections~\ref{sec:covVF} and \ref{sec:comm} to establish Proposition~\ref{prop:main}, thereby completing the proof of the main theorems. \item In Appendix~\ref{app:gauge}, we record the reduced systems in the temporal and Cronstr\"om gauges. These computations are used in Section~\ref{sec:reduction}. \end{itemize} \section{Geometric setup and notation} \label{sec:setup} In this section we provide the basic geometric setup used in this paper. We also take this opportunity to fix the notation and conventions. \subsection{Tensor notation} In this paper, we mostly use the invariant notation for tensors. All tensor products, unless otherwise specified, are taken over $\mathbb R$. The metric dual 1-form of a vector field $X$ will be denoted $X^{\flat}$, and the metric dual $k$-contravariant tensor of a $k$-covariant tensor $T$ will be denoted by $T_{\sharp}$. The Levi-Civita connection associated to the Minkowski metric $\eta$ will be denoted by $\nabla$. This connection is trivial (i.e., has vanishing Christoffel symbols) in the rectilinear coordinates $(t = x^{0}, x^{1}, x^{2})$. Greek indices (e.g., $\mu, \nu$) run over $0,1,2$, and are used either to indicate tensor components in the rectilinear coordinates $(t=x^{0}, x^{1}, x^{2})$ or to parametrize Killing vector fields on $\mathbb R^{1+2}$; see Subsection \ref{subsec:KillingVF} below. We employ the Einstein summation convention of summing up repeated upper and lower indices. Furthermore, indices are raised or lowered using the metric $\eta$, e.g., $T^{\mu} = (\eta^{-1})^{\mu \nu} T_{\nu}$. Sometimes it will be convenient to employ the \emph{abstract index notation}, which we now briefly explain. The abstract indices $a, b, c, \ldots$ are not numbers (like $\mu, \nu = 0, 1, 2$), but rather placeholders which indicate the type of a tensor.
For example, a vector field is written as $X^{a}$ and a $k$-covariant tensor is denoted by $T_{a_{1} \cdots a_{k}}$. Contraction is indicated by repeated upper and lower abstract indices as in the Einstein summation convention, e.g., $T(X, Y) = T_{ab} X^{a} Y^{b}$ for a 2-covariant tensor $T$ and vector fields $X$ and $Y$. This elegant representation of the contraction operation is a key advantage of the abstract index notation. Finally, abstract indices are raised and lowered using the metric $\eta$. When applied to all indices of a vector field $X^{a}$ or a $k$-covariant tensor $T_{a_{1} \cdots a_{k}}$, this is equivalent to taking their respective metric dual, i.e., $X_{a} = X^{\flat}_{a}$ and $T^{a_{1} \cdots a_{k}} = T_{\sharp}^{a_{1} \cdots a_{k}}$. \subsection{Vector bundles and gauge structure of \eqref{eq:CSH} and \eqref{eq:CSD}} \label{subsec:gauge-str} The proper way to describe gauge theories is to use the language of vector bundles. For a general introduction to the theory of vector bundles, we refer to \cite{Kobayashi:1963uh, Kobayashi:1969ub}. In this paper, we only need to consider the trivial $V$-bundle $E := \mathbb R^{1+2} \times V$ for a complex vector space $V$, equipped with a metric $\brk{\cdot, \cdot}_{V}$, as well as the restricted bundles on subsets $\mathcal O \subseteq \mathbb R^{1+2}$. For simplicity, we will often omit the subscript $V$ and write $\brk{\cdot, \cdot} = \brk{\cdot, \cdot}_{V}$. The sections of $E$ may be identified with the $V$-valued functions on $\mathbb R^{1+2}$ by the following procedure. Take a global orthonormal frame field $\set{\Theta_{\mathfrak{a}}}$, i.e., $\dim V$-many sections which form an orthonormal basis with respect to $\brk{\cdot, \cdot}$ at every point; it exists thanks to the triviality of $E$. Then identifying the frame $\set{\Theta_{\mathfrak{a}}(p)}$ at each point $p \in \mathbb R^{1+2}$ with a fixed basis $\set{\theta_{\mathfrak{a}}}$ of $V$, we obtain the desired identification. 
Note that this procedure works equally well for any vector bundle equipped with a real or complex inner product (e.g., the adjoint $\mathfrak{g}$-bundle) on any contractible subset of $\mathbb R^{1+2}$. In this paper, this identification is freely used. A $\mathfrak{G}$-valued function $U$ acts naturally (on the left) on a $V$-valued function $\phi$ by the pointwise action, i.e., $(U \cdot \phi) (p) = U(p) \cdot \phi(p)$. Geometrically, this corresponds to a change of frame at each point $p$ by an appropriate action (on the right) of $U(p)$. We call $U$ a \emph{gauge transformation}, and $U \cdot \phi$ the \emph{gauge transform} of $\phi$ by $U$. Given a section $\phi$ of $E$, realized as a $V$-valued function, a gauge covariant derivative ${}^{(A)}\bfD$ of $\phi$ can be written in reference to $\nabla$ as in \eqref{eq:covd-def}; it is characterized by a $\mathfrak{g}$-valued 1-form $A$, called the corresponding \emph{connection 1-form}. The commutator of two gauge covariant derivatives leads to the curvature 2-form $F$ by \eqref{eq:curv-def}. Under a gauge transformation $U$, the connection 1-form $A$ and the curvature 2-form $F$ transform under the rules \begin{equation} \label{eq:gt} A \mapsto U A U^{-1} - (\mathrm{d} U) U^{-1}, \quad F \mapsto U F U^{-1}. \end{equation} As a consequence, note that $F$ takes values in the adjoint $\mathfrak{g}$-bundle, whereas $A$ does not. Let $A$ be any connection 1-form. As the representation $\rho$ is unitary, $\mathfrak{g}$ acts on $V$ by anti-hermitian operators; hence we have the following Leibniz rule for $V$-valued functions: \begin{equation} \label{eq:leibniz-V} \nabla \brk{\phi^{1}, \phi^{2}} = \brk{{}^{(A)}\bfD \phi^{1}, \phi^{2}} + \brk{\phi^{1}, {}^{(A)}\bfD \phi^{2}}.
\end{equation} Similarly, by the bi-invariance of $\brk{\cdot, \cdot}_{\mathfrak{g}}$, we have \begin{equation} \label{eq:leibniz-g} \nabla \brk{a^{1}, a^{2}}_{\mathfrak{g}} = \brk{{}^{(A)}\bfD a^{1}, a^{2}}_{\mathfrak{g}} + \brk{a^{1}, {}^{(A)}\bfD a^{2}}_{\mathfrak{g}} \end{equation} for $\mathfrak{g}$-valued functions $a^{1}, a^{2}$. Finally, if we define ${}^{(A)}\bfD a$ (where $a$ is $\mathfrak{g}$-valued) by the adjoint action (i.e., Lie bracket), then \begin{equation} \label{eq:leibniz-gV-0} {}^{(A)}\bfD (a \cdot \phi) = ({}^{(A)}\bfD a) \cdot \phi + a \cdot {}^{(A)}\bfD \phi, \end{equation} where $a$ and $\phi$ are $\mathfrak{g}$- and $V$-valued functions, respectively. \subsection{Exterior differential calculus} \label{subsec:extr-calc} We now introduce basic operations of the exterior differential calculus, which will be our main tool for computing commutation relations. A standard reference is \cite[Chapter 1]{Kobayashi:1963uh}. Our notation is as follows: $\wedge$ denotes the wedge product, $\mathrm{d}$ is the exterior derivative and $\iota_{X}$ is the interior product\footnote{Our convention is that the contraction takes place in the left-most slot, i.e., $(\iota_{X} \omega)(Y_{1}, \ldots, Y_{k-1})= \omega(X, Y_{1}, \ldots, Y_{k-1})$. } with a vector field $X$. The Lie derivative with respect to $X$ will be denoted by $\calL_{X}$. This operation makes sense for any tensor field; in particular, one has $\calL_{X} f = X f$ for a function $f$ and $\calL_{X} Y = [X, Y]$ for a vector field $Y$. We also need to develop the exterior differential calculus of vector- and Lie algebra-valued forms. Let $V$ be a complex vector space, equipped with an inner product $\brk{\cdot, \cdot}_{V}$. Consider also the Lie algebra $\mathfrak{g}$ associated with $\mathfrak{G}$, whose action on $V$ is denoted by $a \cdot v$ $(a \in \mathfrak{g}, v \in V)$. When $V = \mathfrak{g}$, we let $\mathfrak{g}$ act by the adjoint action, i.e., $a \cdot v = \LieBr{a}{v}$.
A \emph{$V$-valued $k$-form at a point $p \in \mathbb R^{1+2}$} is a totally anti-symmetric multilinear form that takes in $k$ tangent vectors at $p$ and gives an element of $V$. A (smooth) \emph{$V$-valued $k$-form} on an open subset $\mathcal U$ of $\mathbb R^{1+2}$ is a (smooth) association of points $p$ with a $V$-valued $k$-form at $p$. A $\mathfrak{g}$-valued $k$-form is defined similarly. In order to distinguish them from these objects, the usual $k$-forms on $\mathbb R^{1+2}$ will be referred to as being \emph{real-valued}. Any $V$-valued $k$-form can be decomposed into a linear combination of tensor products of the form $\phi \otimes \omega$, where $\phi$ is a $V$-valued function and $\omega$ is a real-valued $k$-form. The operations $\mathrm{d}$ and $\iota_{X}$ are naturally extended (componentwise) to $V$- and $\mathfrak{g}$-valued $k$-forms, as is the wedge product $v \wedge \omega$ of a $V$-valued $k$-form $v$ and a real-valued $\ell$-form $\omega$. On the other hand, we define the wedge product $a \wedge v$ of a $\mathfrak{g}$-valued $k$-form $a$ and a $V$-valued $\ell$-form $v$ using the action (on the left) of $\mathfrak{g}$ on $V$. This product is characterized by the relation \begin{equation*} (b \otimes \omega^{1}) \wedge (\phi \otimes \omega^{2}) = (b \cdot \phi) \omega^{1} \wedge \omega^{2} \end{equation*} for a $\mathfrak{g}$-valued function $b$, a $V$-valued function $\phi$ and real-valued differential forms $\omega^{1}, \omega^{2}$. In particular, when $V = \mathfrak{g}$, the wedge product of two $\mathfrak{g}$-valued forms $a, b$ is defined using the adjoint action, or the Lie bracket; for this reason, we use the notation $[a \wedge b]$ for this product. Throughout the paper, the following convention is in effect: \begin{convention} Unless otherwise specified by parentheses, wedge products are understood to be taken from the right to the left.
\end{convention} Note that, due to the lack of associativity of the Lie bracket, the wedge product of $\mathfrak{g}$-valued forms generally \emph{fails} to be associative, i.e., we have $[[a \wedge b] \wedge c] \neq [a \wedge [b \wedge c]]$ for $\mathfrak{g}$-valued forms $a, b, c$ in general. Similarly, for a $V$-valued form $v$, in general we have $[a \wedge b] \wedge v \neq a \wedge (b \wedge v)$. Given a connection 1-form $A$, we define the \emph{gauge covariant exterior derivative} ${}^{(A)}\ud$ of a $V$-valued $k$-form $v$ to be \begin{equation} \label{eq:covud} {}^{(A)}\ud v = \mathrm{d} v + A \wedge v. \end{equation} Furthermore, the \emph{gauge covariant Lie derivative} is defined as \begin{equation} \label{eq:covLD} {}^{(A)}\calL_{X} v = \mathcal L_{X} v + (\iota_{X} A) v. \end{equation} Observe that for $V$-valued functions, both definitions coincide with the gauge covariant derivative, i.e., ${}^{(A)}\ud \phi (X) = {}^{(A)}\calL_{X} \phi = {}^{(A)}\bfD_{X} \phi$. The \emph{Hodge star operator} associated to $\eta$ is denoted by $\star$. This operator linearly maps a real-valued $k$-form ($k=0,1,2,3$) to a real-valued $(3-k)$-form, and is characterized by the relation \begin{equation} \label{eq:star-def} \omega^{1} \wedge \star \omega^{2} = \eta^{-1}(\omega^{1}, \omega^{2}) \epsilon, \end{equation} where $\epsilon = \mathrm{d} x^{0} \wedge \mathrm{d} x^{1} \wedge \mathrm{d} x^{2}$ is the volume form on $\mathbb R^{1+2}$ and $\eta^{-1}(\cdot, \cdot)$ is the induced Minkowski metric\footnote{The induced Minkowski metric for real-valued $k$-forms is defined so that given any orthonormal set of 1-forms $\set{e^{0}, e^{1}, e^{2}}$, $\set{e^{i_{1}} \wedge \cdots \wedge e^{i_{k}} : i_{1}, \ldots, i_{k} = 0, 1,2}$ is orthonormal.} on real-valued $k$-forms. This definition naturally extends to $V$- and $\mathfrak{g}$-valued differential forms componentwise. Equivalently, the Hodge star operator $\star$ on a $V$- [resp.
$\mathfrak{g}$-]valued $k$-form is characterized by the relation \begin{equation*} \star (\phi \otimes \omega) = \phi \otimes \star \omega \end{equation*} for $\phi \in V$ [resp. $\phi \in \mathfrak{g}$] and a $k$-form $\omega$. In order to measure the size of real-valued forms, we use the auxiliary Euclidean metric $(\mathrm{d} x^{0})^{2} + (\mathrm{d} x^{1})^{2} + (\mathrm{d} x^{2})^{2}$, which has the benefit of being parallel. Hence for a real-valued $k$-form $\omega$, we define \begin{equation*} \abs{\omega}^{2} = \sum_{\mu_{1} < \cdots < \mu_{k}} \abs{\omega(T_{\mu_{1}}, \ldots, T_{\mu_{k}})}^{2} , \end{equation*} where $T_{\mu} = \partial_{\mu}$ in the rectilinear coordinates. The norm of a $V$- or $\mathfrak{g}$-valued $k$-form is defined similarly, using in addition $\brk{\cdot, \cdot}_{V}$ or $\brk{\cdot, \cdot}_{\mathfrak{g}}$, respectively. Given a $V$-valued $k$-form $v$ on $\mathbb R^{1+2}$ and an embedded submanifold $\Sigma \subset \mathbb R^{1+2}$, we denote by $v \restriction_{\Sigma}$ the \emph{pullback} of $v$ along the inclusion map $\iota: \Sigma \hookrightarrow \mathbb R^{1+2}$, which is a $V$-valued $k$-form on $\Sigma$ characterized by \begin{equation*} v \restriction_{\Sigma}(X_{1}, \ldots, X_{k}) = v(\mathrm{d} \iota (X_{1}), \ldots, \mathrm{d} \iota (X_{k})) \end{equation*} for every $p \in \Sigma$ and $X_{1}, \ldots, X_{k} \in T_{p} \Sigma$, where $\mathrm{d} \iota : T_{p} \Sigma \to T_{\iota(p)} \mathbb R^{1+2}$ is the differential of the map $\iota$ at $p$. In particular, if $v$ is a $V$-valued $0$-form (i.e., a $V$-valued function) then $v \restriction_{\Sigma}$ is simply the restriction of $v$ to $\Sigma$. For further formulae and results in exterior differential calculus, we refer to Section~\ref{subsec:extr-calc-2}.
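As a simple illustration of this calculus, we record the following standard computation, assuming the usual curvature formula $F = \mathrm{d} A + \frac{1}{2} [A \wedge A]$ (which we take to be the content of \eqref{eq:curv-def} in our conventions): for any $V$-valued $k$-form $v$,
\begin{align*}
{}^{(A)}\ud \, {}^{(A)}\ud v
=& \mathrm{d} (\mathrm{d} v + A \wedge v) + A \wedge (\mathrm{d} v + A \wedge v) \\
=& \mathrm{d} A \wedge v - A \wedge \mathrm{d} v + A \wedge \mathrm{d} v + A \wedge (A \wedge v) \\
=& \Big( \mathrm{d} A + \frac{1}{2} [A \wedge A] \Big) \wedge v = F \wedge v,
\end{align*}
where we used $\mathrm{d}^{2} = 0$, the Leibniz rule $\mathrm{d}(A \wedge v) = \mathrm{d} A \wedge v - A \wedge \mathrm{d} v$ for the 1-form $A$, and the identity $A \wedge (A \wedge v) = \frac{1}{2} [A \wedge A] \wedge v$, which follows by antisymmetrizing in the two copies of $A$. In particular, ${}^{(A)}\ud$ fails to square to zero precisely to the extent that the curvature is non-vanishing.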
\subsection{Killing vector fields on $\mathbb R^{1+2}$} \label{subsec:KillingVF} A vector field on a Lorentzian (more generally, pseudo-Riemannian) manifold is said to be \emph{Killing} if it generates a one-parameter group of isometries. As is well-known, there are $6$ linearly independent Killing vector fields on $\mathbb R^{1+2}$, given in the rectilinear coordinates $(t = x^{0}, x^{1}, x^{2})$ by \begin{itemize} \item {\it Translations:} $T_{\mu} = \displaystyle{\partial_{\mu}}$ \item {\it Lorentz transforms and rotations:} $Z_{\mu \nu} = x_{\mu} \partial_{\nu} - x_{\nu} \partial_{\mu}$ \end{itemize} where $\mu, \nu = 0, 1, 2$ and $x_{\mu} = \eta_{\mu \lambda} x^{\lambda}$. These vector fields commute with the Klein--Gordon operator $\Box - 1$. We also define the scaling vector field \begin{equation*} S = x^{\mu} \partial_{\mu} , \end{equation*} which is \emph{not} a Killing vector field. It is a conformal Killing vector field, but it does not satisfy a good commutation relation with respect to $\Box - 1$. The span of the vector fields $T_{\mu}$, $Z_{\mu \nu}$ and $S$ forms a Lie algebra under the natural commutation operation. Schematically, their commutation relations are given as follows: \begin{align*} [T, T ] = & 0, & [Z, T] = & T, & [Z, Z] = & Z, \\ [T, S] = & T, & [Z, S] = & 0, & [S, S] = & 0. \end{align*} Below, we will often consider covariant derivatives of $V$-valued functions with respect to the vector fields introduced above. It will be convenient to introduce the following notation: \begin{equation*} \bfT_{\mu} := {}^{(A)}\bfD_{T_{\mu}}, \quad \bfZ_{\mu \nu} := {}^{(A)}\bfD_{Z_{\mu \nu}} = x_{\mu} \bfT_{\nu} - x_{\nu} \bfT_{\mu}, \quad \bfS = {}^{(A)}\bfD_{S} = x^{\mu} \bfT_{\mu}. \end{equation*} Furthermore, in view of \eqref{eq:normal} below, we also introduce the notation \begin{equation*} \bfN = {}^{(A)}\bfD_{N} = \tau^{-1} \bfS.
\end{equation*} Note that these covariant differential operators coincide with the covariant Lie derivatives along the same vector fields, i.e., $\bfZ_{\mu \nu} \phi = {}^{(A)}\calL_{Z_{\mu \nu}} \phi$ etc. \subsection{Spherical and hyperboloidal polar coordinates} \label{subsec:polar-coords} On each constant $t$-hypersurface, we define the spherical polar coordinates $(r, \theta) \in (0, \infty) \times [0, 2 \pi)$ by \begin{equation*} x^{1} = r \cos \theta, \quad x^{2} = r \sin \theta. \end{equation*} In what follows, we will refer to this coordinate system simply as the \emph{polar coordinates}. In this coordinate system the metric takes the form \begin{equation*} \eta = - \mathrm{d} t^{2} + \mathrm{d} r^{2} + r^{2} \mathrm{d} \theta^{2}. \end{equation*} Define the functions $\omega_{j}$ in the rectilinear coordinates $(t, x^{1}, x^{2})$ by \begin{equation*} \omega_{j} = \omega^{j} := \frac{x^{j}}{\sqrt{(x^{1})^{2} + (x^{2})^{2}}} \qquad (j = 1, 2) \, . \end{equation*} In the polar coordinates, we have $\omega_{1} = \cos \theta$ and $\omega_{2} = \sin \theta$. The \emph{hyperboloidal polar coordinate system} is the Minkowski analogue of the spherical polar coordinate system on Euclidean spaces. The coordinates consist of $(\tau, y, \theta) \in (0, \infty) \times (0, \infty) \times [0, 2 \pi)$, where \begin{equation*} t = \tau \cosh y, \quad r = \tau \sinh y. \end{equation*} These coordinates cover (minus the axis of rotation $\set{(x^{0}, 0, 0)}$, as with the standard polar coordinates) the solid future light cone \begin{equation*} \mathcal C_{0} = \set{(x^{0}, x^{1}, x^{2}): -(x^{0})^{2} + (x^{1})^{2} + (x^{2})^{2} < 0, x^{0} > 0}.
\end{equation*} In this coordinate system, the metric and its inverse take the form \begin{align*} \eta =& - \mathrm{d} \tau^{2} + \tau^{2} \mathrm{d} y^{2} + \tau^{2} \sinh^{2} y \, \mathrm{d} \theta^{2}, \\ \eta^{-1} =& - \partial_{\tau} \otimes \partial_{\tau} + \tau^{-2} \partial_{y} \otimes \partial_{y} + \tau^{-2} (\sinh y)^{-2} \partial_{\theta} \otimes \partial_{\theta} \, . \end{align*} We denote the constant $\tau$-hypersurface by $\mathcal H_{\tau}$. Observe that the future pointing unit normal $N = n_{\mathcal H_{\tau}}$ to $\mathcal H_{\tau}$ is equal to $\partial_{\tau}$, which coincides with the vector field $\tau^{-1} S$, i.e., \begin{equation} \label{eq:normal} N = n_{\mathcal H_{\tau}} = \partial_{\tau} = \tau^{-1} S. \end{equation} The induced volume form on $\mathcal H_{\tau}$ is given by \begin{equation*} \mathrm{d} \sigma_{\mathcal H_{\tau}} = \tau^{2} \cosh y \, \mathrm{d} y \mathrm{d} \theta. \end{equation*} The hyperboloidal polar coordinates are useful since they are Lorentz-invariant, i.e., the vector fields $Z_{\mu \nu}$ are tangent to $\mathcal H_{\tau}$. Indeed, partial derivatives in the hyperboloidal polar coordinate system are related to $Z_{\mu \nu}$ by \begin{align} \partial_{\theta} = & Z_{12}, \label{eq:dTht} \\ \partial_{y} = & - (\omega_{1} Z_{01} + \omega_{2} Z_{02}). \label{eq:dY} \end{align} We also note that \begin{align} \frac{\cosh y}{\sinh y} \partial_{\theta} =& -( \omega_{1} Z_{02} - \omega_{2} Z_{01}), \label{eq:wdTht} \end{align} which is favorable in the region $\set{r \leq t}$. Inverting the linear system consisting of \eqref{eq:dY} and \eqref{eq:wdTht}, $Z_{01}$ and $Z_{02}$ may be written in terms of $\partial_{y}$ and $\partial_{\theta}$ as follows: \begin{align} Z_{01} =& - \omega_{1} \partial_{y} + \omega_{2} \frac{\cosh y}{\sinh y} \partial_{\theta} \label{eq:Z01} \\ Z_{02} =& - \omega_{2} \partial_{y} - \omega_{1} \frac{\cosh y}{\sinh y} \partial_{\theta}.
\label{eq:Z02} \end{align} \subsection{Notation for spacetime regions} Given $R \in \mathbb R$, we denote by $\mathcal C_{R}$ the solid future light cone with its tip at $(R, 0, 0)$, i.e., \begin{equation*} \mathcal C_{R} = \set{(x^{0}, x^{1}, x^{2}) \in \mathbb R^{1+2} : - (x^{0}-R)^{2} + (x^{1})^{2} + (x^{2})^{2} < 0, \, x^{0} > R}. \end{equation*} As discussed above, the cone $\mathcal C_{0}$ admits a foliation by the hyperboloids \begin{equation*} \mathcal H_{\tau} = \set{\tau = const} = \set{(x^{0}, x^{1}, x^{2}) \in \mathbb R^{1+2} : - (x^{0})^{2} + (x^{1})^{2} + (x^{2})^{2} = - \tau^{2}, x^{0} > 0} \end{equation*} for $\tau > 0$, i.e., $\mathcal C_{0} = \cup_{\tau > 0} \mathcal H_{\tau}$. For $R > 0$, we denote by $B_{R}(x_{0})$ the open ball in $\mathbb R^{2}$ of radius $R$ and centered at $x_{0}$, i.e., \begin{equation*} B_{R}(x_{0}) = \set{(x^{1}, x^{2}) \in \mathbb R^{2} : (x^{1} - x_{0}^{1})^{2} + (x^{2} - x_{0}^{2})^{2} < R^{2}}. \end{equation*} In the case $x_{0} = 0$, we will often omit $x_{0}$ and simply write $B_{R} = B_{R}(0)$. \subsection{Norms and other conventions} We use the standard notation $L^{p}$, $W^{k, p}$ and $H^{k}$ for the Lebesgue, $L^{p}$- and $L^{2}$-based Sobolev spaces of order $k$, respectively. Furthermore, we also introduce a weighted norm on $\mathcal H_{\tau}$: \begin{equation} \label{eq:nrm-def} \wnrm{\phi}_{L^{p}_{\tau}} := \nrm{\phi}_{L^{p}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})}. \end{equation} This norm arises naturally from the energy inequality; see Section~\ref{subsec:energy}. In this paper, complicated formulae will often be simplified to their \emph{schematic} form; see for instance Propositions~\ref{prop:comm-covKG}, \ref{prop:comm-J-CSH} and \ref{prop:comm-J-CSD} below.
By a \emph{schematic formula} of the form \begin{equation*} (\hbox{LHS}) = \sum_{k} B_{k}, \end{equation*} we mean precisely that the left-hand side is equal to a linear combination of the terms on the right-hand side, i.e., there exist constants $c_{k}$ such that $(\hbox{LHS}) = \sum_{k} c_{k} B_{k}$. \section{Reduction to the main a priori estimate} \label{sec:reduction} The goal of this section is to reduce the proof of the main theorems (Theorems~\ref{thm:CSH} and \ref{thm:CSD}) to establishing a priori estimates for a unified system \eqref{eq:CS-uni} encompassing both \eqref{eq:CSH} and \eqref{eq:CSD}; see Proposition~\ref{prop:main} below. In order to simplify the exposition, the following convention will be in effect for the remainder of the paper: \begin{convention} The non-zero parameters $\kappa, v$ and $m$ in \eqref{eq:CSH} and \eqref{eq:CSD} are normalized to 1, i.e., $\kappa = v = m = 1$. \end{convention} Our analysis can be adapted in an obvious fashion to the general parameters, as long as they are non-zero. \subsection{Unified system for \eqref{eq:CSH} and \eqref{eq:CSD}} \label{subsec:uni} In this subsection, we first show how \eqref{eq:CSD} can be reduced to a covariant Klein--Gordon equation by squaring the Dirac equation. Building on this reduction, we then introduce a single system that allows for a unified treatment of \eqref{eq:CSH} and \eqref{eq:CSD}; see \eqref{eq:CS-uni} below. Consider the covariant Dirac equation in \eqref{eq:CSD}, i.e., \begin{equation*} (i {}^{(A)} \! \not \!\! \bfD + m) \psi = 0. \end{equation*} Applying the covariant Dirac operator $i {}^{(A)} \! \not \!\! \bfD$ and using \eqref{eq:gmm-mat}, we obtain the following covariant Klein--Gordon equation with mass $m^{2}$ for $\psi$: \begin{equation} \label{eq:CSD:KG} {}^{(A)} \Box \psi - m^{2} \psi = \frac{1}{2} \gamma^{\mu} \gamma^{\nu} F(T_{\mu}, T_{\nu}) \cdot \psi. \end{equation} In what follows, we will work exclusively with the equation \eqref{eq:CSD:KG}.
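For the reader's convenience, we sketch this computation; here we assume that \eqref{eq:gmm-mat} encodes the anti-commutation relation $\gamma^{\mu} \gamma^{\nu} + \gamma^{\nu} \gamma^{\mu} = - 2 \eta^{\mu \nu}$, that \eqref{eq:curv-def} reads $[\bfT_{\mu}, \bfT_{\nu}] \psi = F(T_{\mu}, T_{\nu}) \cdot \psi$, and that ${}^{(A)} \Box = (\eta^{-1})^{\mu \nu} \bfT_{\mu} \bfT_{\nu}$. Since $i {}^{(A)} \! \not \!\! \bfD \psi = - m \psi$, applying $i {}^{(A)} \! \not \!\! \bfD$ once more gives $(i {}^{(A)} \! \not \!\! \bfD)^{2} \psi = m^{2} \psi$. On the other hand, splitting $\gamma^{\mu} \gamma^{\nu}$ into its symmetric and anti-symmetric parts,
\begin{align*}
(i {}^{(A)} \! \not \!\! \bfD)^{2} \psi
=& - \gamma^{\mu} \gamma^{\nu} \bfT_{\mu} \bfT_{\nu} \psi \\
=& - \frac{1}{2} (\gamma^{\mu} \gamma^{\nu} + \gamma^{\nu} \gamma^{\mu}) \bfT_{\mu} \bfT_{\nu} \psi - \frac{1}{2} \gamma^{\mu} \gamma^{\nu} [\bfT_{\mu}, \bfT_{\nu}] \psi \\
=& {}^{(A)} \Box \psi - \frac{1}{2} \gamma^{\mu} \gamma^{\nu} F(T_{\mu}, T_{\nu}) \cdot \psi,
\end{align*}
which, combined with the previous identity, yields \eqref{eq:CSD:KG}.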
In particular, we may forget the spinorial structure of $\psi$, and consider $\psi$ simply as a $V$-valued function on $\mathbb R^{1+2}$. Similarly, we view \begin{equation*} \gamma = \eta_{\mu \nu} \gamma^{\mu} \mathrm{d} x^{\nu}, \quad \alpha = \gamma^{0} \gamma = \eta_{\mu \nu} \alpha^{\mu} \mathrm{d} x^{\nu} \end{equation*} as $2 \times 2$ matrix-valued 1-forms on $\mathbb R^{1+2}$. These observations allow us to treat \eqref{eq:CSD} on the same footing as \eqref{eq:CSH}, despite its original spinorial nature. We now introduce a unified system of equations that subsumes both \eqref{eq:CSH} and \eqref{eq:CSD}. Let $V$ be a complex vector space with inner product $\brk{\cdot, \cdot}$, with an additional structure $V = \Delta \otimes_{\mathbb C} W$ in the case of \eqref{eq:CSD}. Let $\phi$ be a $V$-valued function on $\mathbb R^{1+2}$, which represents \begin{equation*} \phi = \left\{ \begin{array}{cl} \varphi & \hbox{ for } \eqref{eq:CSH} \\ \psi & \hbox{ for } \eqref{eq:CSD}. \end{array} \right. \end{equation*} Let $\mathfrak{G}$ be a Lie group with a positive-definite bi-invariant metric $\brk{\cdot, \cdot}_{\mathfrak{g}}$, which acts on $V$ as described in Sections~\ref{subsec:CSH} and \ref{subsec:CSD}, and let $A$ be a connection 1-form on $\mathbb R^{1+2}$. The \emph{unified Chern--Simons system} is given by \begin{equation} \label{eq:CS-uni} \left\{ \begin{aligned} ({}^{(A)} \Box - 1) \phi = & U(\phi) \\ F =& \star J(\phi). \end{aligned} \right. \end{equation} For \eqref{eq:CSH} and \eqref{eq:CSD}, $J(\phi)$ equals (respectively) \begin{align*} J_{\mathrm{CSH}}(\varphi) =& \brk{\mathcal T \varphi, {}^{(A)}\ud \varphi} + \brk{{}^{(A)}\ud \varphi, \mathcal T \varphi}, \\ J_{\mathrm{CSD}}(\psi) =& \brk{\mathcal T \psi, i \alpha \psi} . 
\end{align*} In the case of \eqref{eq:CSH}, the $V$-valued potential $U(\phi) = U_{\mathrm{CSH}} (\varphi)$ takes the form \begin{equation*} U_{\mathrm{CSH}}(\varphi) = 4 \LieBr{\varphi}{\LieBr{\varphi}{\varphi^{\dagger}}} + 2 \LieBr{\LieBr{\varphi}{\LieBr{\varphi^{\dagger}}{\LieBr{\varphi}{\varphi^{\dagger}}}}}{\varphi} + \LieBr{\LieBr{\varphi}{\LieBr{\varphi}{\varphi^{\dagger}}}}{\LieBr{\varphi}{\varphi^{\dagger}}} . \end{equation*} In the case of \eqref{eq:CSD}, we have $U(\phi) = U_{\mathrm{CSD}}(\psi)$ with \begin{equation} \label{eq:U-CSD} U_{\mathrm{CSD}} (\psi) = \frac{1}{2} \epsilon_{\mu \nu \lambda} \gamma^{\mu} \gamma^{\nu} (J_{\mathrm{CSD}}^{\lambda}(\psi) \cdot \psi). \end{equation} Here $\epsilon_{\mu \nu \lambda} = \epsilon(T_{\mu}, T_{\nu}, T_{\lambda})$. In what follows, we will refer to the first equation of \eqref{eq:CS-uni} as the \emph{covariant Klein--Gordon equation}, and the second equation as the \emph{Chern--Simons equation}. Let $\Sigma \subset \mathbb R^{1+2}$ be a spacelike hypersurface with a future directed unit normal vector field $n_{\Sigma}$. For instance, $(\Sigma, n_{\Sigma}) = (\Sigma_{t_{0}}, T_{0})$ or $(\mathcal H_{\tau_{0}}, N)$. The data on $\Sigma$ for a solution $(A, \phi)$ to \eqref{eq:CS-uni} consist of a triple $(a, f, g)$ of a $\mathfrak{g}$-valued 1-form $a$ and $V$-valued functions $f, g$ on $\Sigma$, such that \begin{equation} \label{eq:CS-uni-id} (a, f, g) =(A, \phi, {}^{(A)}\bfD_{n_{\Sigma}} \phi) \restriction_{\Sigma}. \end{equation} As a consequence of the equation \eqref{eq:CS-uni}, such a triple $(a, f, g)$ obeys the constraint equation \begin{equation} \label{eq:CS-uni-const} \mathrm{d} a + \frac{1}{2} [a \wedge a] = \star J \restriction_{\Sigma}. \end{equation} Accordingly, we say that $(a, f, g)$ is an \emph{initial data set} for \eqref{eq:CS-uni} if it solves \eqref{eq:CS-uni-const} with $(\phi, {}^{(A)}\bfD_{n_{\Sigma}} \phi) \restriction_{\Sigma} = (f, g)$ on the right-hand side.
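Indeed, the constraint \eqref{eq:CS-uni-const} is nothing but the pullback of the Chern--Simons equation $F = \star J$ along the inclusion $\Sigma \hookrightarrow \mathbb R^{1+2}$: assuming the standard curvature formula $F = \mathrm{d} A + \frac{1}{2} [A \wedge A]$, and using the fact that pullback commutes with $\mathrm{d}$ and $\wedge$, we have
\begin{equation*}
F \restriction_{\Sigma} = \mathrm{d} (A \restriction_{\Sigma}) + \frac{1}{2} [A \restriction_{\Sigma} \wedge A \restriction_{\Sigma}] = \mathrm{d} a + \frac{1}{2} [a \wedge a].
\end{equation*}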
Note that an initial data set $(a, f, g)$ for \eqref{eq:CSH} is also an initial data set for \eqref{eq:CS-uni}, whereas an initial data set $(a, \psi_{0})$ of \eqref{eq:CSD} gives rise to an initial data set $(a, f, g)$ for \eqref{eq:CS-uni}, where $f = \psi_{0}$ and $g$ is computed from the Dirac equation $i {}^{(A)} \! \not \!\! \bfD \psi + m \psi = 0$. \subsection{Solving up to the initial hyperboloid} Since both \eqref{eq:CSH} and \eqref{eq:CSD} are time reversible, it suffices to prove Theorems~\ref{thm:CSH} and \ref{thm:CSD} just in the future time direction. As is usual in the vector field method for a Klein--Gordon equation \cite{MR803252}, the main part of our analysis takes place in the hyperboloidal foliation $\set{\mathcal H_{\tau}}_{\tau > 0}$. To connect this analysis with the Cauchy problem for the foliation $\set{\Sigma_{t}}_{t \in \mathbb R}$, we first apply a time translation to place the initial data on $\Sigma_{2R}$, where we remind the reader that $R$ measures the radius of the support of the initial data. Then we use the following result, which passes from initial data posed on $\Sigma_{2R}$ to a solution up to $\mathcal H_{2R}$. \begin{figure} \caption{The initial time slice and the initial hyperboloid} \label{fig:initial-hyp} \end{figure} \begin{proposition}[Solution up to the initial hyperboloid] \label{prop:initial-hyp} There exists $\delta_{\ast\ast} = \delta_{\ast \ast}(R) > 0$ such that the following statements hold. Let $(a, f, g)$ be a smooth \eqref{eq:CSH} initial data set obeying \eqref{eq:CSH-id} [resp. a smooth \eqref{eq:CSD} initial data set obeying \eqref{eq:CSD-id}], and consider the IVP for \eqref{eq:CSH} [resp. \eqref{eq:CSD}] with data on $\Sigma_{2R} = \set{t = 2R}$.
If $\epsilon \leq \delta_{\ast \ast}(R)$, then there exists a smooth solution $(A, \phi)$ to the IVP on the spacetime region (see Figure~\ref{fig:initial-hyp}) \begin{equation*} \mathcal Q := \Big( \set{2R \leq t \leq \tfrac{5}{2} R} \cup \set{\tau \leq 2R} \cup \mathcal C_{R}^{c} \Big) \cap \set{t \geq 2R}, \end{equation*} which is unique up to smooth local gauge transformations. We have \begin{equation} \label{eq:initial-hyp:fsp} {\mathrm{supp}} \, \phi \subseteq \mathcal C_{R} \cap \set{t \geq 2R} = \set{(t, x) : t \geq 2R, \ \abs{x} \leq t - R}. \end{equation} Moreover, the solution $(A, \phi)$ obeys the following gauge-invariant bounds: \begin{equation} \label{eq:initial-hyp:est} \sup_{t \in [2R, \frac{5}{2} R]} \sum_{k=0}^{5} \nrm{\bfT^{(k)} \phi}_{L^{2}(\Sigma_{t})} \leq C \epsilon. \end{equation} \end{proposition} The notion of uniqueness up to smooth local gauge transformations is defined as in the case of \eqref{eq:CSH}; see the discussion following Theorem~\ref{thm:CSH} above. The significance of the time $t = \frac{5}{2} R$ in the definition of $\mathcal Q$ and \eqref{eq:initial-hyp:est} is that $\mathcal H_{2R}$ intersects the cone $\set{t = R + r}$ precisely on the circle $\set{t = \frac{5}{2} R, \ r = \frac{3}{2} R}$; see Figure~\ref{fig:initial-hyp}. A result like Proposition~\ref{prop:initial-hyp} is usually a quick consequence of the local well-posedness theory in the $\set{\Sigma_{t}}_{t \in \mathbb R}$ foliation and the finite speed of propagation; for instance, see \cite{MR2056833}. In order to properly formulate these properties for \eqref{eq:CS-uni}, we must address the issue of gauge choice, which arises at two stages: First, in finding a description of initial data obeying good bounds, and second, in the unique evolution of the initial data. To find a representation of the data with nice bounds, we rely on the following result.
In what follows, given an open subset $O$ of $\Sigma_{t_{0}}$ and a $V$-(or $\mathfrak{g}$-)valued $k$-form $v$, we define its Sobolev norms on $O$ by \begin{equation*} \nrm{v}_{H^{m}(O)}^{2} := \sum_{k=0}^{m} \nrm{{}^{(\Sigma_{t_{0}})}\nabla^{(k)} v}_{L^{2}(O)}^{2}, \end{equation*} where ${}^{(\Sigma_{t_{0}})} \nabla$ denotes the (induced) Levi-Civita connection on $\Sigma_{t_{0}}$. If $v$ is defined on a larger set $\mathcal O \supset O$ (possibly an open set in the spacetime), then this norm is defined using the pullback along $O \hookrightarrow \mathcal O$, i.e., $\nrm{v}_{H^{m}(O)} := \nrm{v \restriction_{O}}_{H^{m}(O)}$. \begin{proposition}\label{prop:id-temporal} Let $B := \set{t_{0}} \times B_{r_{0}}(x_{0}) \subseteq \Sigma_{t_{0}}$ and $(a, f, g)$ an initial data set for \eqref{eq:CS-uni} on $B$ satisfying the constraint equation \eqref{eq:CS-uni-const}. For a fixed integer $m \geq 1$, let \begin{equation} \label{eq:id-size-temporal} \alpha := \sum_{k=0}^{m} \nrm{{}^{(A, \Sgm_{t_{0}})}\bfD^{(k)} f}_{L^{2}(B)} + \sum_{k=0}^{m-1} \nrm{{}^{(A, \Sgm_{t_{0}})}\bfD^{(k)} g}_{L^{2}(B)}, \end{equation} where ${}^{(A, \Sgm_{t_{0}})}\bfD$ is the (induced) covariant derivative on $\Sigma_{t_{0}}$. If $\alpha < \alpha_{\ast}(r_{0})$, where $\alpha_{\ast}(r_{0})$ is some fixed positive function depending only on $r_{0}$, then there exists a smooth gauge transformation $U$ on $B$ such that the gauge-transformed potential $\tilde{a} = U a U^{-1} - \mathrm{d} U U^{-1}$ satisfies \begin{align*} \iota_{{\bf n}_{\partial B}} \tilde{a} =& 0 \quad \hbox{ on } \partial B = \set{x : \abs{x - x_{0}} = r_{0}}, \\ {}^{(\Sigma_{t_{0}})} \delta \tilde{a} = & 0 \quad \hbox{ on } B. \end{align*} Here, ${\bf n}_{\partial B}$ is the outer normal vector field on $\partial B$ tangent to $\Sigma_{t_{0}}$ and ${}^{(\Sigma_{t_{0}})} \delta$ is the exterior codifferential on $\Sigma_{t_{0}}$. 
Moreover, the gauge-transformed initial data set $(\tilde{a}, \tilde{f} = U \cdot f, \tilde{g} = U \cdot g)$ obeys the bounds \begin{align*} \nrm{\tilde{a}}_{H^{m}(B)} \leq & C(m) \alpha^{2}, \\ \nrm{\tilde{f}}_{H^{m}(B)} + \nrm{\tilde{g}}_{H^{m-1}(B)} \leq & C(m) \alpha. \end{align*} \end{proposition} In the case $m = 1$, this proposition is an immediate consequence of the classical theorem of Uhlenbeck \cite[Theorem~1.3]{Uhlenbeck:1982vna} and the explicit formula for the curvature 2-form $F[a]$ in terms of $f$ and $g$ via the constraint equation \eqref{eq:CS-uni-const}. To handle the case $m > 1$, first note that $\tilde{a}$ is a solution to the boundary value problem for the div-curl system \begin{equation*} \left\{ \begin{aligned} \mathrm{d} \tilde{a} =& F[\tilde{a}] - \frac{1}{2} [\tilde{a} \wedge \tilde{a}]\quad \hbox{ on } B \\ {}^{(\Sigma_{t_{0}})} \delta \tilde{a} =& 0 \qquad \hbox{ on } B\\ \iota_{{\bf n}_{\partial B}} \tilde{a} =& 0 \qquad \hbox{ on } \partial B, \end{aligned} \right. \end{equation*} where $F[\tilde{a}]$ is again given explicitly in terms of $\tilde{f}, \tilde{g}$ through the constraint equation. Hence standard elliptic arguments lead to higher regularity bounds for $\tilde{a}$ in terms of those for $F[\tilde{a}]$, which in turn follow from bounds for $\tilde{f}$, $\tilde{g}$. We omit the routine induction argument on $m$ that leads to the full proof. To state a local well-posedness theorem for \eqref{eq:CS-uni}, we choose the \emph{temporal gauge} \begin{equation*} \iota_{\partial_{t}} A = A_{0} = 0, \end{equation*} which has the nice feature of directly exhibiting the property of finite speed of propagation. To proceed further, we need to introduce some terminology. We say that a vector (or a direction) $X$ tangent to $\mathbb R^{1+2}$ is time-like [resp. null or space-like] if $\eta(X, X) < 0$ [resp. $\eta(X, X) = 0$ or $\eta(X, X) > 0$]. A time-like or null vector $X$ is said to be future-directed [resp.
past-directed] if $\eta(X, \partial_{t}) < 0$ [resp. $\eta(X, \partial_{t}) > 0$]. Finally, given a subset $O \subseteq \Sigma_{t_{0}} = \set{t_{0}} \times \mathbb R^{2}$, we define the \emph{future} [resp. past] \emph{domain of dependence} $\calD^{+}(O)$ to be the set of all points $p \in \mathbb R^{1+2}$ such that all straight rays emanating from $p$ in the past- [resp. future-] directed time-like or null directions intersect with $O$. For instance, if $O = \set{t_{0}} \times B_{r_{0}}(x_{0})$ is the ball of radius $r_{0}$ centered at $x_{0}$ in $\Sigma_{t_{0}}$, then $\calD^{+}(O)$ is the cone \begin{equation*} \calD^{+}(O) = \set{(t, x) \in \mathbb R^{1+2} : t_{0} \leq t < t_{0} + r_{0}, \ \abs{x - x_{0}} < r_{0} + t_{0} - t} \end{equation*} We are now ready to state local in spacetime well-posedness of \eqref{eq:CS-uni} in the temporal gauge, which includes the finite speed of propagation property. \begin{theorem}[Local well-posedness in the temporal gauge] \label{thm:lwp-temporal} Let $B := \set{t_{0}} \times B_{r_{0}}(x_{0}) \subseteq \Sigma_{t_{0}}$ and $(a, f, g)$ a smooth initial data set for \eqref{eq:CS-uni} on $B$. Fix $ m \geq 3$, and let \begin{equation*} \tilde{\alpha}^{2} := \nrm{a}_{H^{m-1}(B)} + \nrm{{}^{(\Sigma_{t_{0}})} \delta a}_{H^{m-1}(B)} + \nrm{f}_{H^{m}(B)}^{2} + \nrm{g}_{H^{m-1}(B)}^{2}. \end{equation*} If $\tilde{\alpha} < \tilde{\alpha}_{\ast}(r_{0})$, where $\tilde{\alpha}_{\ast}(r_{0})$ is some fixed positive nondecreasing function of $r_{0}$, then there exists a unique smooth solution $(A, \phi)$ to the IVP for \eqref{eq:CS-uni} satisfying the temporal gauge condition $\iota_{\partial_{t}} A = 0$ in the set $\calD^{+}(B)$. 
Moreover, the solution obeys the bound \begin{equation*} \sup_{t \in [t_{0}, t_{0} + r_{0})} \Big( \nrm{A}_{H^{m-1}(B_{t})} + \nrm{\delta A}_{H^{m-1}(B_{t})} + \nrm{\phi}_{H^{m}(B_{t})}^{2} + \nrm{\partial_{t} \phi}_{H^{m-1}(B_{t})}^{2} \Big) \leq C \tilde{\alpha}^{2}, \end{equation*} where $B_{t} := \Sigma_{t} \cap \calD^{+}(B)$. \end{theorem} In the temporal gauge $\iota_{\partial_{t}} A = 0$, the Chern--Simons system \eqref{eq:CS-uni} becomes a coupled system of a Klein--Gordon equation for $\phi$ and transport equations for $A$ and $\delta A$ whose characteristics are precisely the constant $x$ curves. The precise form of the system can be found in Appendix~\ref{subsec:temporal}, using the formalism developed in Sections~\ref{subsec:extr-calc} and \ref{subsec:extr-calc-2}. The initial data for $(A, \phi)$ on $B$ are $(a, f, g)$ as in \eqref{eq:CS-uni-id}, whereas the initial data for $\delta A$ on $B$ is ${}^{(\Sigma_{t_{0}})} \delta a$, thanks to the temporal gauge condition. Theorem~\ref{thm:lwp-temporal} follows from a standard Picard iteration argument using the localized energy inequality for the wave equation in $\calD^{+}(B)$, integration along characteristics for the transport equation and the Sobolev inequality. We omit the details. \begin{proof} [Sketch of proof of Proposition~\ref{prop:initial-hyp}] Recall that $0 \leq \epsilon < \delta_{\ast \ast}(R)$ by hypothesis. Choosing $\delta_{\ast\ast}(R)$ sufficiently small, we may apply Proposition~\ref{prop:id-temporal} with $B = \set{2R} \times B_{3 R}(0)$ and $m = 5$ to find a gauge transform $(\tilde{a}, \tilde{f}, \tilde{g})$ of $(a, f, g)$ on $B$ obeying \begin{equation*} \nrm{\tilde{a}}_{H^{5}(B)} + \nrm{\tilde{f}}_{H^{5}(B)}^{2} + \nrm{\tilde{g}}_{H^{4}(B)}^{2} \leq C \epsilon^{2}. 
\end{equation*} Taking $\delta_{\ast \ast}(R)$ smaller if necessary, we may apply Theorem~\ref{thm:lwp-temporal} to $(\tilde{a}, \tilde{f}, \tilde{g})$ to construct a unique smooth solution $(\tilde{A}^{(in)}, \tilde{\phi}^{(in)})$ to \eqref{eq:CS-uni} in the temporal gauge on the set \begin{equation*} \mathcal Q^{(in)}:= \calD^{+}(B) = \set{(t,x) : 2 R \leq t < 5 R, \ \abs{x} < 5 R - t}, \end{equation*} which obeys the estimate \begin{equation} \label{eq:initial-hyp:in-sol} \sup_{t \in [2R, 5R)} \Big( \nrm{\tilde{A}^{(in)}}_{H^{4}(B_{t})} + \nrm{\delta \tilde{A}^{(in)}}_{H^{4}(B_{t})} + \nrm{\tilde{\phi}^{(in)}}_{H^{5}(B_{t})}^{2} + \nrm{\partial_{t} \tilde{\phi}^{(in)}}_{H^{4}(B_{t})}^{2} \Big) \leq C \epsilon^{2}, \end{equation} where $B_{t} = \Sigma_{t} \cap \calD^{+}(B) = \set{(t, x) : \abs{x} < 5R - t}$. Simply by undoing the gauge transformation from Proposition~\ref{prop:id-temporal}, we obtain from $(\tilde{A}^{(in)}, \tilde{\phi}^{(in)})$ a smooth solution to the IVP for the original data $(a, f, g)$ on $\mathcal Q^{(in)}$, which we denote by $(A^{(in)}, \phi^{(in)})$. To complete the proof, it remains to prove the existence of a smooth solution $(A^{(out)}, \phi^{(out)})$ to the IVP with the data $(a, f, g)$ on the set $\mathcal Q^{(out)} := \mathcal C_{R}^{c}$. Indeed, once we have $(A^{(out)}, \phi^{(out)})$ on $\mathcal Q^{(out)}$, the desired solution $(A, \phi)$ may be constructed by simply patching $(A^{(in)}, \phi^{(in)})$ and $(A^{(out)}, \phi^{(out)})$ on $\mathcal Q^{(in)} \cup \mathcal Q^{(out)} \supset \mathcal Q$. As we will see below, $\phi^{(out)} = 0$, which proves \eqref{eq:initial-hyp:fsp}. Furthermore, \eqref{eq:initial-hyp:est} follows from \eqref{eq:initial-hyp:fsp}, \eqref{eq:initial-hyp:in-sol} and \eqref{eq:CS-uni}. Uniqueness up to smooth local gauge transformations can be proved afterwards using Proposition~\ref{prop:id-temporal} and Theorem~\ref{thm:lwp-temporal}. 
Fix a ball $B$ contained in $\Sigma_{2R} \setminus (\set{2R} \times B_{R})$. Since $(f, g) = (0, 0)$ on $B$, Proposition~\ref{prop:id-temporal} implies the existence of a smooth gauge transformation $U_{B}$, such that the gauge transform $(\tilde{a}, \tilde{f}, \tilde{g})$ of $(a, f, g)$ by $U_{B}$ is identically zero on $B$. Therefore, by Theorem~\ref{thm:lwp-temporal} the unique smooth solution to \eqref{eq:CS-uni} on $\calD^{+}(B)$ with data $(\tilde{a}, \tilde{f}, \tilde{g})$ in the temporal gauge is the zero solution. Undoing the gauge transformation $U_{B}$, we obtain $(A^{(out)}, \phi^{(out)})$ on $\calD^{+}(B)$. Since $B$ is arbitrary and sets of the form $\calD^{+}(B)$ cover $\mathcal Q^{(out)}$, these local solutions patch up to the desired solution $(A^{(out)}, \phi^{(out)})$ on the whole region $\mathcal Q^{(out)}$. The fact that $\phi^{(out)} = 0$ is clear from the construction. \qedhere \end{proof} \subsection{Local well-posedness in the hyperboloidal foliation} In order to proceed, we need a local well-posedness theory of \eqref{eq:CS-uni} in the hyperboloidal foliation $\set{\mathcal H_{\tau}}_{\tau >0}$. For our purposes, it suffices to formulate analogues of Proposition~\ref{prop:id-temporal} and Theorem~\ref{thm:lwp-temporal} in this setting. Let ${\bf d}_{\mathcal H_{\tau_{0}}}(x, y)$ denote the geodesic distance between points $x$ and $y$ on $\mathcal H_{\tau_{0}}$. We define the geodesic ball ${}^{(\mathcal H_{\tau_{0}})} B_{r_{0}}(x_{0})$ with radius $r_{0}$ and center $x_{0}$ in $\mathcal H_{\tau_{0}}$ by \begin{equation*} {}^{(\mathcal H_{\tau_{0}})} B_{r_{0}}(x_{0}) = \set{x \in \mathcal H_{\tau_{0}} : {\bf d}_{\mathcal H_{\tau_{0}}}(x, x_{0}) < r_{0}}. 
\end{equation*} Given an open subset $O$ of $\mathcal H_{\tau_{0}}$ and a $V$-(or $\mathfrak{g}$-)valued $k$-form $v$ on $O$, let \begin{equation*} \nrm{v}_{H^{m}(O)}^{2} := \sum_{k=0}^{m} \nrm{{}^{(\mathcal H_{\tau_{0}})} \nabla^{(k)} v}_{L^{2}(O)}^{2}, \end{equation*} where ${}^{(\mathcal H_{\tau_{0}})} \nabla$ denotes the (induced) Levi-Civita connection on $\mathcal H_{\tau_{0}}$. If $v$ is defined on a larger set $\mathcal O \supset O$ (possibly an open set in the spacetime), then $\nrm{v}_{H^{m}(O)} := \nrm{v \restriction_{O}}_{H^{m}(O)}$. We are now ready to state the analogue of Proposition~\ref{prop:id-temporal} in $\mathcal H_{\tau_{0}}$. \begin{proposition}\label{prop:id-cronstrom} Let $B := {}^{(\mathcal H_{\tau_{0}})} B_{r_{0}}(x_{0}) \subseteq \mathcal H_{\tau_{0}}$ and $(a, f, g)$ an initial data set for \eqref{eq:CS-uni} on $B$ satisfying the constraint equation \eqref{eq:CS-uni-const}. For a fixed integer $m \geq 1$, let \begin{equation} \label{eq:id-size-cronstrom} \beta := \sum_{k=0}^{m} \nrm{{}^{(A, \calH_{\hT_{0}})}\bfD^{(k)} f}_{L^{2}(B)} + \sum_{k=0}^{m-1} \nrm{{}^{(A, \calH_{\hT_{0}})}\bfD_{x}^{(k)} g}_{L^{2}(B)}, \end{equation} where ${}^{(A, \calH_{\hT_{0}})}\bfD$ is the (induced) covariant derivative on $\mathcal H_{\tau_{0}}$. If $\beta < \beta_{\ast}(\tau_{0}, x_{0}, r_{0})$, where $\beta_{\ast}(\tau_{0}, x_{0}, r_{0})$ is some fixed positive function depending only on $\tau_{0}, x_{0}$ and $r_{0}$, then there exists a smooth gauge transformation $U$ on $B$ such that the gauge-transformed potential $\tilde{a} = U a U^{-1} - \mathrm{d} U U^{-1}$ satisfies \begin{align*} \iota_{{\bf n}_{\partial B}} \tilde{a} =& 0 \quad \hbox{ on } \partial B, \\ {}^{(\mathcal H_{\tau_{0}})} \delta \tilde{a} = & 0 \quad \hbox{ on } B. 
\end{align*} Here, $\partial B = \set{x \in \mathcal H_{\tau_{0}} : {\bf d}_{\mathcal H_{\tau_{0}}}(x, x_{0}) = r_{0}}$, ${\bf n}_{\partial B}$ is the outer normal vector field on $\partial B$ tangent to $\mathcal H_{\tau_{0}}$ and ${}^{(\mathcal H_{\tau_{0}})} \delta$ is the exterior codifferential on $\mathcal H_{\tau_{0}}$. Moreover, the gauge transformed initial data set $(\tilde{a}, \tilde{f} = U \cdot f, \tilde{g} = U \cdot g)$ obeys the bounds \begin{align*} \nrm{\tilde{a}}_{H^{m}(B)} \leq & C(m) \beta^{2}, \\ \nrm{\tilde{f}}_{H^{m}(B)} + \nrm{\tilde{g}}_{H^{m-1}(B)} \leq & C(m) \beta. \end{align*} \end{proposition} The proof of Proposition~\ref{prop:id-cronstrom} proceeds exactly as that of Proposition~\ref{prop:id-temporal}; we skip the details. We now turn to the task of formulating a local (in spacetime) well-posedness theorem in the $\set{\mathcal H_{\tau}}_{\tau > 0}$ foliation. In order to fix the gauge ambiguity, we use the \emph{Cronstr\"om gauge condition}, which reads $x^{\mu} A_{\mu} = 0$ in the rectilinear coordinates. In the hyperboloidal polar coordinates, the gauge condition takes the form \begin{equation} \label{eq:cronstrom} \iota_{\partial_{\tau}} A = A_{\tau} = 0. \end{equation} This gauge is an analogue of the temporal gauge in the hyperboloidal foliation $\set{\mathcal H_{\tau}}_{\tau > 0}$. In the Cronstr\"om gauge, the analogue of Theorem~\ref{thm:lwp-temporal} reads as follows. \begin{theorem}[Local well-posedness in the Cronstr\"om gauge] \label{thm:lwp} Let $B:={}^{(\mathcal H_{\tau_{0}})} B_{r_{0}}(x_{0}) \subseteq \mathcal H_{\tau_{0}}$ and $(a, f, g)$ a smooth initial data set for \eqref{eq:CS-uni} on $B$. Fix $m \geq 3$, and let \begin{equation*} \tilde{\beta}^{2} := \nrm{a}_{H^{m}(B)} + \nrm{{}^{(\mathcal H_{\tau_{0}})} \delta a}_{H^{m}(B)} + \nrm{f}_{H^{m}(B)}^{2} + \nrm{g}_{H^{m-1}(B)}^{2}.
\end{equation*} If $\tilde{\beta} < \tilde{\beta}_{\ast}(\tau_{0}, x_{0}, r_{0})$, where $\tilde{\beta}_{\ast}(\tau_{0}, x_{0}, r_{0})$ is some positive nondecreasing function of $r_{0}$ for each fixed $\tau_{0}$ and $x_{0}$, then there exists a unique smooth solution $(A, \phi)$ to the IVP for \eqref{eq:CS-uni} satisfying the Cronstr\"om gauge condition \eqref{eq:cronstrom} in the set $\calD^{+}(B)$. Moreover, the solution obeys the bound \begin{equation*} \sup_{\tau \geq \tau_{0}} \Big( \nrm{A}_{H^{m-1}(B_{\tau})} + \nrm{\delta A}_{H^{m-1}(B_{\tau})} + \nrm{\phi}_{H^{m}(B_{\tau})}^{2} + \nrm{\partial_{\tau} \phi}_{H^{m-1}(B_{\tau})}^{2} \Big) \leq C \tilde{\beta}^{2}, \end{equation*} where $B_{\tau} := \mathcal H_{\tau} \cap \calD^{+}(B)$. \end{theorem} As in the case of the temporal gauge, the Chern--Simons system \eqref{eq:CS-uni} becomes a coupled system of a Klein--Gordon equation for $\phi$ and transport equations for $A$ and $\delta A$ whose characteristics are precisely the integral curves of the scaling vector field $S$ (or equivalently $\partial_{\tau}$). For the precise form of the system, we refer to Appendix~\ref{subsec:cronstrom}. Hence Theorem~\ref{thm:lwp} is again proved by a standard Picard iteration argument as in the case of Theorem~\ref{thm:lwp-temporal}. We remark that the finite speed of propagation of the transport equation (more precisely, the fact that the solution to the transport equation on $\calD^{+}(B)$ is determined solely by the data on $B$) follows from the fact that the characteristics are causal curves. \subsection{Reduction to the main a priori estimate} Our goal now is to reduce the proof of the main theorems (Theorems~\ref{thm:CSH} and \ref{thm:CSD}) to establishing a priori estimates for solutions to \eqref{eq:CS-uni} with initial data obeying \eqref{eq:CSH-id} or \eqref{eq:CSD-id} with sufficiently small $\epsilon$.
Before we state the main a priori estimates, we need to specify the class of solutions to which these estimates apply. As in the hypothesis of Proposition~\ref{prop:initial-hyp}, let $(a, f, g)$ be a smooth \eqref{eq:CSH} initial data set obeying \eqref{eq:CSH-id} [resp. a smooth \eqref{eq:CSD} initial data set obeying \eqref{eq:CSD-id}] with $\epsilon \leq \delta_{\ast \ast}(R)$, and consider the IVP for \eqref{eq:CSH} [resp. \eqref{eq:CSD}] with data on $\set{t = 2R}$. By Proposition~\ref{prop:initial-hyp}, there exists a smooth solution $(A, \phi)$ (unique up to smooth local gauge transformations) to the IVP on $(\set{\tau \leq 2R} \cup \mathcal C_{R}^{c}) \cap \set{t \geq 2 R}$. By Theorem~\ref{thm:lwp}, $(A, \phi)$ extends as a smooth solution (again, unique up to smooth local gauge transformations) to a region of the form \begin{equation*} \mathcal O_{T} := (\set{\tau \leq T} \cup \mathcal C_{R}^{c}) \cap \set{t \geq 2R} \end{equation*} for some $T > 2R$. In order to show that $(A, \phi)$ extends to a global solution to the future, we need to show that $T$ can be taken to be $+ \infty$, if $\epsilon > 0$ is sufficiently small. The following a priori estimate is a key step. \begin{proposition}[Main a priori estimate] \label{prop:main} There exists $\delta_{\ast} = \delta_{\ast}(R) > 0$ such that for any $T > 2R$, the following statements hold. Let $(A, \phi)$ be a smooth solution to the IVP for \eqref{eq:CS-uni} on the spacetime region $\mathcal O_{T}$ constructed by Proposition~\ref{prop:initial-hyp} and Theorem~\ref{thm:lwp} as above.
If $\epsilon \leq \delta_{\ast}(R)$, then the solution obeys the following estimates for $2R \leq \tau \leq T$: \begin{itemize} \item {\bf $L^{2}$ bounds with growth.} For $0 \leq m \leq 4$, \begin{equation} \label{eq:main:L2} \nrm{\cosh y \bfZ^{(m)} \phi}_{L^{2}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} + \nrm{\bfT \bfZ^{(m)} \phi}_{L^{2}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \lesssim \epsilon \log^{m} (1+\tau) . \end{equation} \item {\bf Sharp $L^{\infty}$ decay.} \begin{equation} \label{eq:main:Linfty} \nrm{\cosh y \phi}_{L^{\infty}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} + \nrm{\cosh y \, \bfN \phi}_{L^{\infty}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} + \nrm{\bfT \phi}_{L^{\infty}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \lesssim \epsilon \tau^{-1}. \end{equation} \end{itemize} \end{proposition} Assuming the validity of Proposition~\ref{prop:main} for the moment, we may now prove Theorems~\ref{thm:CSH} and \ref{thm:CSD}. \begin{proof} [Proof of Theorems~\ref{thm:CSH} and \ref{thm:CSD} assuming Proposition~\ref{prop:main}] First, we note that uniqueness up to smooth local gauge transformations follows from Proposition~\ref{prop:id-temporal} and Theorem~\ref{thm:lwp-temporal}. Hence it only remains to show global existence of the smooth solution $(A, \phi)$ to the future. For this purpose, it suffices to show that if $(A, \phi)$ obeys the hypothesis of Proposition~ \ref{prop:main} on some $\mathcal O_{T}$, then it can be extended as a smooth solution to $\mathcal O_{T'}$ for some $T' > T$. Indeed, using Proposition~\ref{prop:main} we may then set up a simple continuity argument to prove global existence of $(A, \phi)$ to the future, as well as the desired estimates stated in Theorems~\ref{thm:CSH} and \ref{thm:CSD}. Such an extension statement is a consequence of \eqref{eq:main:L2}, Proposition~\ref{prop:id-cronstrom} and Theorem~\ref{thm:lwp}. 
Given any point $x_{0} \in \mathcal H_{T}$, there exists $r_{\ast} > 0$, depending on $\bfT \bfZ \phi, \ldots, \bfT \bfZ^{(4)} \phi$ and $x_{0}$, such that Proposition~\ref{prop:id-cronstrom} (with $m = 5$) applies to $(a, f, g) = (A, \phi, {}^{(A)}\bfD_{\partial_{\tau}} \phi) \restriction_{B}$ on $ B = {}^{(\mathcal H_{T})} B_{r_{\ast}}(x_{0})$, which produces a gauge-transformed data set $(\tilde{a}, \tilde{f}, \tilde{g})$ on $B$. Choosing $r_{\ast} > 0$ smaller if necessary, we may use Theorem~\ref{thm:lwp} to find a unique smooth solution $(\tilde{A}, \tilde{\phi})$ to \eqref{eq:CS-uni} on $\calD^{+}(B)$ in the Cronstr\"om gauge with data $(\tilde{a}, \tilde{f}, \tilde{g})$. Undoing the gauge transformation from Proposition~\ref{prop:id-cronstrom}, we arrive at a smooth extension of $(A, \phi)$ to $\calD^{+}({}^{(\mathcal H_{T})} B_{r_{\ast}}(x_{0}))$. Thanks to the support property \eqref{eq:initial-hyp:fsp}, such an extension needs to be performed only for $x_{0}$ on the compact set $\mathcal C_{R} \cap \mathcal H_{T}$. Therefore, we can find $\mathcal O_{T'}$ with $T' > T$ to which $(A, \phi)$ extends as desired. \qedhere \end{proof} As described in Section~\ref{subsec:outline}, Sections~\ref{sec:covVF}--\ref{sec:BA} of this article are devoted to the proof of the main a priori estimates (Proposition~\ref{prop:main}). \section{Gauge covariant vector field method} \label{sec:covVF} In this section, we develop the machinery of the gauge covariant vector field method for the covariant Klein--Gordon equation \begin{equation*} {}^{(A)} \Box \phi - \phi = N. \end{equation*} Throughout this section, we denote by $A$ a $\mathfrak{g}$-valued connection 1-form, and by $\phi$ a $V$-valued function. We furthermore assume that $\phi$ and $N$ are sufficiently smooth and decaying toward spatial infinity (for instance, $\phi(t), N(t) \in \mathcal S$ uniformly in $t$).
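For the reader's orientation, we record the schematic form of the covariant objects in rectilinear coordinates; we use the standard conventions, with $\cdot$ denoting the action of $\mathfrak{g}$ on $V$ and $[\cdot, \cdot]$ the Lie bracket of $\mathfrak{g}$: \begin{align*} {}^{(A)}\bfD_{\mu} \phi =& \partial_{\mu} \phi + A_{\mu} \cdot \phi, \\ {}^{(A)} \Box \phi =& (\eta^{-1})^{\mu \nu} \, {}^{(A)}\bfD_{\mu} {}^{(A)}\bfD_{\nu} \phi, \\ F_{\mu \nu} =& \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} + [A_{\mu}, A_{\nu}]. \end{align*} In particular, $[{}^{(A)}\bfD_{\mu}, {}^{(A)}\bfD_{\nu}] \phi = F_{\mu \nu} \cdot \phi$; it is this commutation identity that produces the curvature terms appearing in the energy identities of this section.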
\subsection{A covariant energy inequality} \label{subsec:energy} In this subsection, we derive an energy inequality for the covariant Klein--Gordon equation on constant $\tau$-hypersurfaces $\mathcal H_{\tau}$ using the time-like Killing vector field $T_{0}$. This inequality is fundamental to our development of the gauge covariant version of the vector field method. The main result is as follows: \begin{proposition}[Energy inequality for covariant Klein--Gordon equation] \label{prop:en} Suppose that the curvature 2-form $F$ satisfies the bound \begin{equation} \label{eq:en:F} \int_{\tau_{0}}^{\tau_{1}} \sup_{\mathcal H_{\tau}} \Big( \sum_{\mu} \abs{F(T_{\mu}, T_{0})}^{2} \Big)^{1/2} \, \mathrm{d} \tau \leq C_{F}. \end{equation} for some constant $0 < C_{F} < \infty$. Then there exists a constant $C = C(C_{F}) > 0$ such that for all $\tau_{0} \leq \tau \leq \tau_{1}$, we have \begin{align} \Big( \int_{\mathcal H_{\tau}} \bfe_{\mathcal H_{\tau}}[\phi] \, \mathrm{d} \sgm_{\mathcal H_{\tau}} \Big)^{1/2} \leq C \Big( \int_{\mathcal H_{\tau_{0}}} \bfe_{\mathcal H_{\tau_{0}}}[\phi] \, \mathrm{d} \sgm_{\mathcal H_{\tau_{0}}} \Big)^{1/2} + C \int_{\tau_{0}}^{\tau} \wnrm{\cosh y ({}^{(A)} \Box - 1) \phi(\tau')}_{L^{2}_{\tau'}}\, \mathrm{d} \tau', \label{eq:en} \end{align} where the energy density $\bfe_{\mathcal H_{\tau}}[\phi]$ is defined in \eqref{eq:ed} below. 
Moreover, there exist constants $c, C' > 0$ such that the integral of $\bfe_{\mathcal H_{\tau}}[\phi]$ obeys the following lower bounds: \begin{align} \label{eq:en-phi} \int_{\mathcal H_{\tau}} \bfe_{\mathcal H_{\tau}}[\phi] \, \mathrm{d} \sgm_{\mathcal H_{\tau}} \geq& c \Big( \wnrm{\cosh y \phi}_{L^{2}_{\tau}}^{2} + \wnrm{\bfN \phi}_{L^{2}_{\tau}}^{2} + \wnrm{\tau^{-1} \bfZ \phi}_{L^{2}_{\tau}}^{2} + \wnrm{\bfT \phi}_{L^{2}_{\tau}}^{2} \Big) \end{align} \begin{equation} \label{eq:en-Nphi} \begin{aligned} \int_{\mathcal H_{\tau}} \bfe_{\mathcal H_{\tau}}[\phi] \, \mathrm{d} \sgm_{\mathcal H_{\tau}} + \sum_{\mu, \nu} \frac{C'}{\tau^{2}} \int_{\mathcal H_{\tau}} \bfe_{\mathcal H_{\tau}}[\bfZ_{\mu \nu} \phi] \, \mathrm{d} \sgm_{\mathcal H_{\tau}} \geq c \wnrm{\cosh y \bfN \phi}_{L^{2}_{\tau}}^{2} \end{aligned} \end{equation} \end{proposition} We remind the reader that $\wnrm{\cdot}_{L^{p}_{\tau}} = {\nrm{\cdot}_{L^{p}(\mathcal H_{\tau}, \frac{\mathrm{d} \sigma}{\cosh y})}}$. \begin{proof} In this proof, it will be convenient to employ the abstract index notation for tensors. Then the energy-momentum tensor associated to the covariant Klein--Gordon equation $({}^{(A)} \Box - 1) \phi = 0$ may be written as \begin{equation*} \calQ[\phi]_{a b} = \mathrm{Re} \brk{{}^{(A)}\bfD_{a} \phi, {}^{(A)}\bfD_{b} \phi} - \frac{1}{2} \eta_{a b} (\brk{{}^{(A)}\bfD_{c} \phi, {}^{(A)}\bfD^{c} \phi} + \brk{\phi, \phi}). \end{equation*} It can be easily verified that $\calQ[\phi]_{ab}$ is symmetric in $a,b$ and satisfies \begin{align} \nabla^{a} \calQ[\phi]_{a b} =& \mathrm{Re} \brk{({}^{(A)} \Box \phi - \phi), {}^{(A)}\bfD_{b} \phi} + \mathrm{Re} \brk{F_{ab} \cdot \phi, {}^{(A)}\bfD^{a} \phi}. 
\label{eq:div4EM} \end{align} Given a vector field $X$, we may define the 1- and 0-currents associated to $X$ by \begin{align*} \vC{X}[\phi]_{a} :=& \calQ[\phi]_{ab} X^{b}, \\ \sC{X}[\phi] :=& \frac{1}{2} \calQ[\phi]_{ab} (\defT{X}_{\sharp})^{ab}, \end{align*} where $\defT{X}_{ab}$ is the deformation tensor of $X$, given by \begin{equation*} \defT{X}_{ab} = \nabla_{a} X_{b} + \nabla_{b} X_{a} \end{equation*} and $(\defT{X}_{\sharp})^{ab}$ is its metric dual, i.e., $(\defT{X}_{\sharp})^{ab} := \defT{X}_{cd} \, \eta^{ca} \eta^{db}$. The currents ${}^{(X)}P_{a}$ and $\sC{X}$ satisfy the divergence identity \begin{equation} \label{eq:en-div} \nabla^{a} (\vC{X}[\phi])_{a} = \sC{X}[\phi] + (\nabla^{a} \calQ_{ab}[\phi]) X^{b}. \end{equation} We now derive the energy inequality that will be used below, by considering the associated currents to the time-like Killing vector field $T_{0} = \partial_{t}$. Since $T_{0}$ is Killing, it satisfies \begin{equation*} \nabla^{a} T_{0}^{b} + \nabla^{b} T_{0}^{a} = 0. \end{equation*} It follows that $\sC{T_{0}} = 0$ and therefore, by \eqref{eq:div4EM}, we have \begin{equation} \label{eq:en-div-T} \nabla^{a} (\vC{T_{0}}[\phi])_{a} = \Big( \mathrm{Re} \brk{({}^{(A)} \Box \phi - \phi), {}^{(A)}\bfD_{b} \phi} + \mathrm{Re} \brk{F_{ab} \cdot \phi, {}^{(A)}\bfD^{a} \phi} \Big) (T_{0})^{b}. 
\end{equation} Integrating this identity over the spacetime region $\set{(\tau', y, \theta) : \tau_{0} \leq \tau' \leq \tau}$ and applying the divergence theorem, we obtain \begin{align*} \int_{\mathcal H_{\tau}} \bfe_{\mathcal H_{\tau}}[\phi] \, \mathrm{d} \sigma_{\mathcal H_{\tau}} = & \int_{\mathcal H_{\tau_{0}}} \bfe_{\mathcal H_{\tau_{0}}}[\phi] \, \mathrm{d} \sigma_{\mathcal H_{\tau_{0}}} - \int_{\tau_{0}}^{\tau} \int \mathrm{Re}\brk{({}^{(A)} \Box - 1) \phi, \bfT_{0} \phi} \, \mathrm{d} \sigma_{\mathcal H_{\tau'}} \, \mathrm{d} \tau' \\ & - \int_{\tau_{0}}^{\tau} \int (\eta^{-1})^{\mu \nu} \mathrm{Re} \brk{F(T_{\mu}, T_{0}) \cdot \phi, \bfT_{\nu} \phi} \, \mathrm{d} \sigma_{\mathcal H_{\tau'}} \, \mathrm{d} \tau', \end{align*} where the energy density $\bfe_{\mathcal H_{\tau}}[\phi]$ is defined as \begin{equation} \label{eq:ed} \bfe_{\mathcal H_{\tau}}[\phi] = \vC{T_{0}}[\phi](N) = \calQ[\phi](T_{0}, N). \end{equation} We introduce the function \begin{equation*} E(\tau) = \sup_{\tau_{0} \leq \tau' \leq \tau} \int_{\mathcal H_{\tau'}} \bfe_{\mathcal H_{\tau'}}[\phi] \, \mathrm{d} \sigma_{\mathcal H_{\tau'}}, \end{equation*} which is non-decreasing. Assume, for the moment, that the lower bound \eqref{eq:en-phi} holds; it will be verified below. By \eqref{eq:en-phi}, Cauchy--Schwarz and H\"older's inequality, we arrive at the bound \begin{align*} E(\tau) \leq & E(\tau_{0}) + \frac{1}{c^{1/2}} \int_{\tau_{0}}^{\tau} \wnrm{\cosh y ({}^{(A)} \Box - 1) \phi}_{L^{2}_{\tau'}} E(\tau')^{1/2} \, \mathrm{d} \tau' \\ & + \frac{1}{c} \int_{\tau_{0}}^{\tau} \Big( \sup_{\mathcal H_{\tau'}} \Big( \sum_{\mu} \abs{F(T_{\mu}, T_{0})}^{2} \Big)^{1/2} \Big) E(\tau') \, \mathrm{d} \tau'. \end{align*} Using the fact that $E(\tau)$ is non-decreasing, we may pull out a factor of $E(\tau)^{1/2}$ from each term on the right-hand side, which can then be cancelled on both sides. Then applying Gronwall's inequality to handle the last term, \eqref{eq:en} follows.
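In more detail, since $E$ is non-decreasing, we have $E(\tau_{0}) \leq E(\tau_{0})^{1/2} E(\tau)^{1/2}$ and $E(\tau')^{1/2} \leq E(\tau)^{1/2}$ for $\tau_{0} \leq \tau' \leq \tau$, so dividing the preceding bound by $E(\tau)^{1/2}$ (the case $E(\tau) = 0$ being trivial) gives \begin{equation*} E(\tau)^{1/2} \leq E(\tau_{0})^{1/2} + \frac{1}{c^{1/2}} \int_{\tau_{0}}^{\tau} \wnrm{\cosh y ({}^{(A)} \Box - 1) \phi}_{L^{2}_{\tau'}} \, \mathrm{d} \tau' + \frac{1}{c} \int_{\tau_{0}}^{\tau} \Big( \sup_{\mathcal H_{\tau'}} \Big( \sum_{\mu} \abs{F(T_{\mu}, T_{0})}^{2} \Big)^{1/2} \Big) E(\tau')^{1/2} \, \mathrm{d} \tau'. \end{equation*} By Gronwall's inequality and the hypothesis \eqref{eq:en:F}, we conclude that \begin{equation*} E(\tau)^{1/2} \leq e^{C_{F}/c} \Big( E(\tau_{0})^{1/2} + \frac{1}{c^{1/2}} \int_{\tau_{0}}^{\tau} \wnrm{\cosh y ({}^{(A)} \Box - 1) \phi}_{L^{2}_{\tau'}} \, \mathrm{d} \tau' \Big), \end{equation*} which is precisely \eqref{eq:en} with $C = C(C_{F})$.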
To complete the proof of the proposition, it only remains to verify the bounds \eqref{eq:en-phi} and \eqref{eq:en-Nphi}. In the hyperboloidal polar coordinates, $T_{0}$ can be written as \begin{equation*} T_{0} = \cosh y \partial_{\tau} - \sinh y \frac{\partial_{y}}{\tau}. \end{equation*} Note furthermore that $N = \partial_{\tau}$ is the future-pointing unit normal to each $\mathcal H_{\tau}$. Therefore, the energy density associated to $T_{0}$ on $\mathcal H_{\tau}$ is given by \begin{equation} \label{eq:ed-first} \begin{aligned} \bfe_{\mathcal H_{\tau}}[\phi] =& \cosh y \calQ[\phi](\partial_{\tau}, \partial_{\tau}) - \sinh y \calQ[\phi](\frac{\partial_{y}}{\tau}, \partial_{\tau}) \\ =& \frac{1}{2} \cosh y \Big( \abs{{}^{(A)}\bfD_{\tau} \phi}^{2} + \abs{\frac{1}{\tau} {}^{(A)}\bfD_{y} \phi}^{2} \Big) - \sinh y \, \mathrm{Re} \brk{\frac{1}{\tau} {}^{(A)}\bfD_{y} \phi, {}^{(A)}\bfD_{\tau} \phi} \\ & + \frac{1}{2} \cosh y \Big( \abs{\frac{1}{\tau \sinh y} {}^{(A)}\bfD_{\theta} \phi}^{2} + \abs{\phi}^{2} \Big) \end{aligned} \end{equation} By Cauchy-Schwarz, we have \begin{align*} \bfe[\phi] \geq & \frac{1}{2} \Big( \cosh y \abs{\phi}^{2} +e^{-y} \abs{{}^{(A)}\bfD_{\tau} \phi}^{2} + e^{-y} \abs{\frac{1}{\tau}{}^{(A)}\bfD_{y} \phi}^{2} + \cosh y \abs{\frac{1}{\tau \sinh y} {}^{(A)}\bfD_{\theta} \phi}^{2} \Big) \\ \geq & \frac{1}{2 \cosh y} \Big( \abs{\cosh y \phi}^{2} + \abs{{}^{(A)}\bfD_{\tau} \phi}^{2} + \abs{\frac{1}{\tau} {}^{(A)}\bfD_{y} \phi}^{2} + \abs{\frac{1}{\tau } \frac{\cosh y}{\sinh y} {}^{(A)}\bfD_{\theta} \phi}^{2} \Big). 
\end{align*} Integrating over $\mathcal H_{\tau}$ with respect to the induced measure $\mathrm{d} \sigma_{\mathcal H_{\tau}}$, then applying \eqref{eq:normal}, \eqref{eq:dTht}, \eqref{eq:Z01} and \eqref{eq:Z02}, we obtain \begin{equation*} \int_{\mathcal H_{\tau}} \bfe[\phi] \, \mathrm{d} \sigma_{\mathcal H_{\tau}} \geq c \Big( \wnrm{\cosh y \phi}_{L^{2}_{\tau}}^{2} + \wnrm{\tau^{-1} \bfZ \phi}_{L^{2}_{\tau}}^{2} + \wnrm{\bfN \phi}_{L^{2}_{\tau}}^{2} \Big). \end{equation*} Combined with the simple pointwise bound \begin{equation*} \abs{\bfT \phi} \leq C \cosh y ( \abs{\bfN \phi} + \tau^{-1} \abs{\bfZ \phi}), \end{equation*} the desired lower bound \eqref{eq:en-phi} follows. To prove \eqref{eq:en-Nphi}, note first that \begin{align*} \sinh y \, \mathrm{Re} \brk{\frac{1}{\tau} {}^{(A)}\bfD_{y} \phi, {}^{(A)}\bfD_{\tau} \phi} \leq& \cosh y \abs{\tau^{-1} {}^{(A)}\bfD_{y} \phi}^{2} + \frac{1}{4} \cosh y \abs{{}^{(A)}\bfD_{\tau} \phi}^{2} \\ \leq& \frac{C'}{\tau^{2}} \sum_{\mu, \nu} \bfe[\bfZ_{\mu \nu} \phi] + \frac{1}{4} \cosh y \abs{{}^{(A)}\bfD_{\tau} \phi}^{2}, \end{align*} for some $C' > 0$. Combined with \eqref{eq:ed-first}, we see that there exists $c > 0$ such that \begin{equation} \label{eq:energy4S} \bfe[\phi] + \frac{C'}{\tau^{2}} \sum_{\mu, \nu} \bfe[\bfZ_{\mu \nu} \phi] \geq c \cosh y \abs{{}^{(A)}\bfD_{\tau} \phi}^{2} = \frac{c}{\cosh y} \abs{\cosh y \bfN \phi}^{2}, \end{equation} from which \eqref{eq:en-Nphi} follows. \qedhere \end{proof} As a consequence of the identity \eqref{eq:en-div-T} in the preceding proof, we may relate the energy on the initial hyperboloid $\mathcal H_{\tau_{0}}$ with the energy on the constant time hypersurface $\Sigma_{t_{0}} = \set{t = t_{0}}$. This result will be used later to prove Lemma~\ref{lem:BA:ini}, which justifies the bootstrap assumptions at the initial hypersurface $\mathcal H_{2R}$. \begin{lemma} \label{lem:ini-en} Let $\tau_{0} \geq t_{0}$.
Then we have \begin{align*} \int_{\mathcal H_{\tau_{0}}} \bfe_{\mathcal H_{\tau_{0}}}[\phi] \, \mathrm{d} \sgm_{\mathcal H_{\tau_{0}}} \leq& \int_{\Sigma_{t_{0}}} \bfe_{\Sigma_{t_{0}}}[\phi] \, \mathrm{d} x^{1} \mathrm{d} x^{2} \\ & + \int_{\mathcal R_{t=t_{0}}^{\tau=\tau_{0}}} \Big( \abs{({}^{(A)} \Box - 1) \phi } \abs{\bfT_{0} \phi} + \sum_{\mu} \abs{F(T_{\mu}, T_{0})} \abs{\bfT_{\mu} \phi} \Big)\, \mathrm{d} t \mathrm{d} x^{1} \mathrm{d} x^{2} \end{align*} where $\Sigma_{t_{0}} = \set{t=t_{0}}$, $\bfe_{\Sigma_{t_{0}}}[\phi] = \frac{1}{2} \sum_{\mu} \abs{\bfT_{\mu} \phi}^{2} + \frac{1}{2} \abs{\phi}^{2}$ and \begin{equation} \label{eq:ini-en:region} \mathcal R_{t=t_{0}}^{\tau = \tau_{0}} := \set{(x^{0}, x^{1}, x^{2}) \in \mathbb R^{1+2} : x^{0} \geq t_{0}, \, (x^{0})^{2} - (x^{1})^{2} - (x^{2})^{2} \leq \tau_{0}^{2}}. \end{equation} \end{lemma} \begin{proof} Note that $\bfe_{\Sigma_{t_{0}}}[\phi] = \vC{T_{0}}[\phi](T_{0})$, where $T_{0} = \partial_{t}$ is the future-pointing unit normal to $\Sigma_{t_{0}}$. The lemma is an immediate consequence of integrating the identity \eqref{eq:en-div-T} over $\mathcal R_{t=t_{0}}^{\tau = \tau_{0}}$ and applying the divergence theorem. \qedhere \end{proof} \subsection{Gauge invariant Klainerman--Sobolev inequality} In this subsection, we derive a gauge invariant version of the Klainerman--Sobolev inequality, which constitutes another key ingredient of the gauge covariant vector field method. \begin{proposition} \label{prop:KlSob} Let $\phi$ be a smooth $V$-valued function on $\mathcal H_{\tau}$. Then we have \begin{equation} \label{eq:KlSob} \tau \nrm{\cosh y \phi}_{L^{\infty}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \leq C \sum_{k : 0 \leq k \leq 2} \nrm{\cosh y \bfZ^{(k)} \phi}_{L^{2}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})}. \end{equation} \end{proposition} A gauge invariant Klainerman--Sobolev inequality of this type was first established by Psarelli \cite{MR2131047,MR1672001} in $\mathbb R^{1+3}$.
To make the present paper self-contained, we sketch a proof of Proposition~\ref{prop:KlSob}. \begin{remark} Recall that in the rectilinear coordinates $(t, x^{1}, x^{2})$, we have $t = \tau \cosh y$. Therefore, if the norm on the right-hand side were bounded, then \eqref{eq:KlSob} would imply that $\phi$ decays with the rate $t^{-1}$, which is sharp for the Klein--Gordon equation on $\mathbb R^{1+2}$. In our application below, however, the norm on the right-hand side will grow in $\tau$, which will result in a loss of decay. \end{remark} In our proof of Proposition \ref{prop:KlSob}, we will employ some standard Sobolev inequalities on $\mathbb R^{2}$ and $\mathbb S^{1}$. For the reader's convenience, we state (without proofs) the necessary inequalities in the next lemma. \begin{lemma} \label{lem:stdSob} The following statements hold. \begin{enumerate} \item Let $\phi$ be a function in the Sobolev space $W^{1, 2}(\set{x \in \mathbb R^{2} : \abs{x} \leq 1})$. Then we have \begin{equation} \label{eq:stdSob:L2L4} \nrm{\phi}_{L^{4}(\set{x \in \mathbb R^{2} : \abs{x} \leq 1}, \mathrm{d} x)} \leq C \nrm{\phi}_{W^{1,2}(\set{x \in \mathbb R^{2} : \abs{x} \leq 1}, \mathrm{d} x)}. \end{equation} \item Let $\phi$ be a function in the Sobolev space $W^{1, 4}(\set{x \in \mathbb R^{2} : \abs{x} \leq 1})$. Then we have \begin{equation} \label{eq:stdSob:L4Linfty} \nrm{\phi}_{L^{\infty}(\set{x \in \mathbb R^{2} : \abs{x} \leq 1}, \mathrm{d} x)} \leq C \nrm{\phi}_{W^{1,4}(\set{x \in \mathbb R^{2} : \abs{x} \leq 1}, \mathrm{d} x)}. \end{equation} \item Let $\phi$ be a function in the Sobolev space $W^{1,2}(\mathbb S^{1})$. Then we have \begin{equation} \label{eq:stdSob:S1} \nrm{\phi}_{L^{\infty}(\mathbb S^{1})} \leq C \nrm{\phi}_{W^{1,2}(\mathbb S^{1})}. 
\end{equation} \end{enumerate} \end{lemma} Another important ingredient of our proof is a version of the \emph{diamagnetic inequality} (also commonly referred to as \emph{Kato's inequality}), which allows us to relate covariant derivatives of a $V$-valued function with ordinary derivatives of its amplitude. \begin{lemma}[Diamagnetic inequality] \label{lem:diamag} For any vector field $X$ and smooth $V$-valued function $\phi$ on $\mathcal H_{1}$ or $\mathbb S^{1}$, we have \begin{equation} \label{eq:diamag} \partial_{X} \abs{\phi} \leq \abs{{}^{(A)}\bfD_{X} \phi}, \end{equation} in the sense of distributions, i.e., the inequality holds after testing against smooth non-negative compactly supported functions on $\mathcal H_{1}$ or $\mathbb S^{1}$. By the dual characterization of $L^{p}$ norms, it follows that $\partial_{X} \abs{\phi} \in L^{p}$ and $\nrm{\partial_{X} \abs{\phi}}_{L^{p}} \leq \nrm{{}^{(A)}\bfD_{X} \phi}_{L^{p}}$ for all $1 \leq p < \infty$. \end{lemma} We omit the standard proof. \begin{proof} [Proof of Proposition \ref{prop:KlSob}] By scaling, it suffices to prove the following inequality for smooth compactly supported functions $\phi$ on $\mathcal H_{1}$: \begin{equation} \label{eq:KlSob:key} \cosh y \abs{\phi(y, \theta)} \leq C \sum_{\alpha : 0 \leq \abs{\alpha} \leq 2} \Big( \int_{0}^{\infty} \int_{\mathbb S^{1}} \cosh y' \abs{{\bf Z}^{\alpha} \phi(y', \theta')}^{2}\, \sinh y' \mathrm{d} \theta' \mathrm{d} y'\Big)^{\frac{1}{2}}. \end{equation} We begin by establishing \eqref{eq:KlSob:key} in the region $y \leq 1$. In this region, the point is that \eqref{eq:KlSob:key} reduces to its unweighted analogue on $\mathbb R^{2}$ through the relations \begin{equation} \label{eq:KlSob:cptY} \cosh y \simeq 1, \quad \sinh y \simeq y. \end{equation} Here, the notation $A \simeq B$ means that there exist positive constants $0 < c \leq C$ such that $cA \leq B \leq CA$. 
By \eqref{eq:stdSob:L2L4}, \eqref{eq:KlSob:cptY}, the diamagnetic inequality \eqref{eq:diamag} with $X = \partial_{y}, \frac{1}{y} \partial_{\theta}$ and the relations \eqref{eq:dY}, \eqref{eq:wdTht}, we have \begin{align*} \nrm{\phi}_{L^{4}(\mathcal H_{1} \cap \set{y \leq 1})} \leq & C \Big( \nrm{\partial_{y}\abs{\phi}}_{L^{2}(\mathcal H_{1} \cap \set{y \leq 1})} + \nrm{\frac{1}{y} \partial_{\theta}\abs{\phi}}_{L^{2}(\mathcal H_{1} \cap \set{y \leq 1})} + \nrm{\phi}_{L^{2}(\mathcal H_{1} \cap \set{y \leq 1})} \Big) \\ \leq & C \sum_{\alpha: 0 \leq \abs{\alpha} \leq 1} \nrm{\bfZ^{\alpha} \phi}_{L^{2}(\mathcal H_{1} \cap \set{y \leq 1})}, \end{align*} and similarly \begin{align*} \nrm{\bfZ \phi}_{L^{4}(\mathcal H_{1} \cap \set{y \leq 1})} \leq & C \sum_{\alpha: 1 \leq \abs{\alpha} \leq 2} \nrm{\bfZ^{\alpha} \phi}_{L^{2}(\mathcal H_{1} \cap \set{y \leq 1})}. \end{align*} Repeating the preceding argument with \eqref{eq:stdSob:L2L4} replaced by \eqref{eq:stdSob:L4Linfty}, we have \begin{align*} \nrm{\phi}_{L^{\infty}(\mathcal H_{1} \cap \set{y \leq 1})} \leq & C \sum_{\alpha: 0 \leq \abs{\alpha} \leq 1} \nrm{\bfZ^{\alpha} \phi}_{L^{4}(\mathcal H_{1} \cap \set{y \leq 1})}. \end{align*} Putting together the previous three inequalities and using $\cosh y \simeq 1$ to build in the appropriate weights, the desired inequality \eqref{eq:KlSob:key} in the region $\set{y \leq 1}$ follows. Next, we turn to the task of proving \eqref{eq:KlSob:key} in the region $y \geq 1$. Using the fundamental theorem of calculus, Cauchy-Schwarz and \eqref{eq:dY}, we compute \begin{align*} \cosh^{2} y \abs{\phi(y, \theta)}^{2} \leq & 2 \frac{\cosh y}{\sinh y} \abs{\int_{y}^{\infty} \cosh y' \mathrm{Re} \brk{\phi, {}^{(A)}\bfD_{\partial_{y}} \phi}(y', \theta) \sinh y' \, \mathrm{d} y' } \\ \leq & C \int_{0}^{\infty} \cosh y' (\abs{\phi}^{2} + \abs{\bfZ \phi}^{2})(y', \theta) \sinh y' \, \mathrm{d} y'. 
\end{align*} We have used the fact that $\frac{\cosh y}{\sinh y} \leq C$, which holds since $y \geq 1$. Applying the previous computation to ${}^{(A)}\bfD_{\partial_{\theta}} \phi = \bfZ_{12} \phi$, we obtain \begin{align*} \cosh^{2} y \abs{{}^{(A)}\bfD_{\partial_{\theta}} \phi(y, \theta)}^{2} \leq & C \int_{0}^{\infty} \cosh y' (\abs{\bfZ \phi}^{2} + \abs{\bfZ^{(2)} \phi}^{2})(y', \theta) \sinh y' \, \mathrm{d} y'. \end{align*} Integrating the preceding two inequalities over $\theta \in \mathbb S^{1}$, we obtain \begin{align*} & \cosh^{2} y \int_{\mathbb S^{1}} \Big( \abs{\phi(y, \theta)}^{2} + \abs{{}^{(A)}\bfD_{\partial_{\theta}} \phi(y, \theta)}^{2} \Big) \, \mathrm{d} \theta \\ & \quad \leq C \sum_{\alpha : 0 \leq \abs{\alpha} \leq 2} \int_{0}^{\infty} \int_{\mathbb S^{1}} \cosh^{2} y' \abs{{\bf Z}^{\alpha} \phi(y', \theta')}^{2}\, \frac{\sinh y' \mathrm{d} \theta' \mathrm{d} y'}{\cosh y'}. \end{align*} Now the desired inequality \eqref{eq:KlSob:key} follows from the combination of the standard Sobolev inequality \eqref{eq:stdSob:S1} and the diamagnetic inequality \eqref{eq:diamag} (with $X = \partial_{\theta}$) on $\mathbb S^{1}$. \qedhere \end{proof} \subsection{A gauge invariant ODE argument for sharp decay} \label{subsec:ODE} Due to the specific structure of our problem, it turns out that the combination of the energy and the Klainerman--Sobolev inequality is insufficient for establishing the sharp decay rate. What we need is a version of the ODE argument \cite{MR2188297, MR2056833} devised to handle the modified scattering behavior due to a long range effect, adapted to the gauge covariant setting.
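Before stating the main result of this subsection, let us illustrate the underlying mechanism in the model scalar case. If $u = u(\tau)$ solves $\ddot{u} + u = f$, then multiplying the equation by $\dot{u}$ yields
\begin{equation*}
\Big\vert \partial_{\tau} \big( \abs{\dot{u}}^{2} + \abs{u}^{2} \big)^{\frac{1}{2}} \Big\vert \leq \abs{f},
\end{equation*}
and hence $\big( \abs{\dot{u}}^{2} + \abs{u}^{2} \big)^{\frac{1}{2}}(\tau) \leq \big( \abs{\dot{u}}^{2} + \abs{u}^{2} \big)^{\frac{1}{2}}(\tau_{0}) + \int_{\tau_{0}}^{\tau} \abs{f(\tau')} \, \mathrm{d} \tau'$. The following proposition is a gauge covariant analogue of this computation, applied to the variable $\tau \cosh y \, \phi$; the first integral on the right-hand side collects the contribution of the covariant Laplacian on $\mathcal H_{\tau}$.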
\begin{proposition} \label{prop:ODE} For every $(y, \theta) \in [0, \infty) \times \mathbb S^{1}$ and $0 < \tau_{0} \leq \tau < \infty$, the following inequality holds: \begin{equation} \label{eq:ODE} \begin{aligned} &\hskip-2em \abs{{}^{(A)}\bfD_{\tau} (\tau \cosh y \phi) (\tau, y, \theta)} + \tau \cosh y \abs{\phi (\tau, y, \theta)} \\ \leq & C \Big( \abs{{}^{(A)}\bfD_{\tau} (\tau \cosh y \phi) (\tau_{0}, y, \theta)} + \tau_{0} \cosh y \abs{\phi (\tau_{0}, y, \theta)} \Big) \\ & + C \sum_{k: 1 \leq k \leq 2} \int_{\tau_{0}}^{\tau} \frac{\cosh y}{\tau'} \abs{\bfZ^{(k)} \phi(\tau', y, \theta)} \, \mathrm{d} \tau' \\ & + C \int_{\tau_{0}}^{\tau} \tau' \cosh y \abs{({}^{(A)} \Box - 1)\phi(\tau', y, \theta)} \, \mathrm{d} \tau'. \end{aligned}\end{equation} \end{proposition} Our proof of this proposition is based on the following algebraic computation, which relates the induced covariant Laplacian on $\mathcal H_{\tau}$ with the $\omega_{j}$'s and $\bfZ_{\mu \nu}$'s. \begin{lemma} \label{lem:covLapOnH} Let $\triangle_{A, \mathcal H_{\tau}}$ be the induced covariant Laplacian on $\mathcal H_{\tau}$, i.e., \begin{equation*} \triangle_{A, \mathcal H_{\tau}} := \frac{1}{\tau^{2}} \Big( \frac{1}{\sinh y} {}^{(A)}\bfD_{\partial_{y}} (\sinh y {}^{(A)}\bfD_{\partial_{y}}) + \frac{1}{\sinh^{2} y} {}^{(A)}\bfD_{\partial_{\theta}}^{2} \Big). \end{equation*} Then the following identity holds.
\begin{equation} \label{eq:covLapOnH} \begin{aligned} \triangle_{A, \mathcal H_{\tau}} =& - \frac{\sinh^{2} y}{\tau^{2} \cosh^{2} y} \Big( \omega_{1}^{2} \bfZ_{02}^{2} + \omega_{2}^{2} \bfZ_{01}^{2} - \omega_{1} \omega_{2} (\bfZ_{01} \bfZ_{02} + \bfZ_{02} \bfZ_{01}) \Big) \\ & + \frac{1}{\tau^{2}} (\bfZ_{01}^{2} + \bfZ_{02}^{2} ) - \frac{\sinh y}{\tau^{2} \cosh y} (\omega_{1} \bfZ_{01} + \omega_{2} \bfZ_{02}) \end{aligned} \end{equation} \end{lemma} \begin{proof} Using \eqref{eq:Z01}, \eqref{eq:Z02} and the fact that $\omega_{1}^{2} + \omega_{2}^{2} =1$, we compute \begin{align*} \frac{1}{\sinh y} {}^{(A)}\bfD_{\partial_{y}} (\sinh y {}^{(A)}\bfD_{\partial_{y}}) =& {}^{(A)}\bfD_{\partial_{y}}^{2} + \frac{\cosh y}{\sinh y} {}^{(A)}\bfD_{\partial_{y}} \\ =& (\omega_{1} \bfZ_{01} + \omega_{2} \bfZ_{02})^{2} - \frac{\cosh y}{\sinh y} (\omega_{1} \bfZ_{01} + \omega_{2} \bfZ_{02}) \\ =& \omega_{1}^{2} \bfZ_{01}^{2} + \omega_{2}^{2} \bfZ_{02}^{2} + \omega_{1} \omega_{2} (\bfZ_{01} \bfZ_{02} + \bfZ_{02} \bfZ_{01}) \\ & - \frac{\cosh y}{\sinh y} (\omega_{1} \bfZ_{01} + \omega_{2} \bfZ_{02}). \end{align*} Similarly, using \eqref{eq:wdTht}, we have \begin{align*} \frac{\cosh^{2} y}{\sinh^{2} y} {}^{(A)}\bfD_{\partial_{\theta}}^{2} =& \Big( \omega_{1} \bfZ_{02} - \omega_{2} \bfZ_{01} \Big)^{2} \\ =& \omega_{1} \Big( \omega_{1} \bfZ_{02} - \omega_{2} \bfZ_{01} \Big) \bfZ_{02} - \omega_{2} \Big( \omega_{1} \bfZ_{02} - \omega_{2} \bfZ_{01} \Big) \bfZ_{01} \\ & - \frac{\cosh y}{\sinh y} \partial_{\theta} \omega_{1} \bfZ_{02} + \frac{\cosh y}{\sinh y} \partial_{\theta} \omega_{2} \bfZ_{01} \\ =& \omega_{1}^{2} \bfZ_{02}^{2} + \omega_{2}^{2} \bfZ_{01}^{2} - \omega_{1} \omega_{2} (\bfZ_{01} \bfZ_{02} + \bfZ_{02} \bfZ_{01}) + \frac{\cosh y}{\sinh y} (\omega_{1} \bfZ_{01} + \omega_{2} \bfZ_{02}). 
\end{align*} Hence \begin{align*} & \frac{1}{\sinh y} {}^{(A)}\bfD_{\partial_{y}} (\sinh y {}^{(A)}\bfD_{\partial_{y}}) + \frac{1}{\sinh^{2} y} {}^{(A)}\bfD_{\partial_{\theta}}^{2} \\ & \quad = \frac{1}{\sinh y} {}^{(A)}\bfD_{\partial_{y}} (\sinh y {}^{(A)}\bfD_{\partial_{y}}) + \frac{\cosh^{2} y}{\sinh^{2} y} {}^{(A)}\bfD_{\partial_{\theta}}^{2} - {}^{(A)}\bfD_{\partial_{\theta}}^{2} \\ &\quad = \bfZ_{01}^{2} + \bfZ_{02}^{2} - \frac{\sinh^{2} y}{\cosh^{2} y} \Big( \omega_{1}^{2} \bfZ_{02}^{2} + \omega_{2}^{2} \bfZ_{01}^{2} - \omega_{1} \omega_{2} (\bfZ_{01} \bfZ_{02} + \bfZ_{02} \bfZ_{01}) \Big) \\ & \qquad - \frac{\sinh y}{\cosh y} (\omega_{1} \bfZ_{01} + \omega_{2} \bfZ_{02}). \end{align*} Recalling the definition of $\triangle_{A, \mathcal H_{\tau}}$, the lemma follows. \qedhere \end{proof} With Lemma \ref{lem:covLapOnH} in hand, we are ready to prove Proposition \ref{prop:ODE}. \begin{proof} [Proof of Proposition \ref{prop:ODE}] We begin by expanding \begin{align*} {}^{(A)} \Box \phi - \phi = & - \frac{1}{\tau^{2}} {}^{(A)}\bfD_{\tau} (\tau^{2} {}^{(A)}\bfD_{\tau} \phi) - \phi + \triangle_{A, \mathcal H_{\tau}} \phi. \end{align*} Then by Lemma \ref{lem:covLapOnH}, we have \begin{align*} & \hskip-2em {}^{(A)}\bfD_{\tau}^{2} (\tau \cosh y \phi) + (\tau \cosh y \phi) \\ =& - \frac{\sinh^{2} y}{\tau \cosh y} \Big( \omega_{1}^{2} \bfZ_{02}^{2} + \omega_{2}^{2} \bfZ_{01}^{2} - \omega_{1} \omega_{2} (\bfZ_{01} \bfZ_{02} + \bfZ_{02} \bfZ_{01}) \Big)\phi \\ & + \frac{\cosh y}{\tau} (\bfZ_{01}^{2} + \bfZ_{02}^{2} )\phi - \frac{\sinh y}{\tau} (\omega_{1} \bfZ_{01} + \omega_{2} \bfZ_{02})\phi - (\tau \cosh y) ({}^{(A)} \Box \phi - \phi). \end{align*} Taking the inner product with ${}^{(A)}\bfD_{\tau} (\tau \cosh y \phi)$, the left-hand side becomes \begin{equation*} \frac{1}{2} \partial_{\tau} \Big( \abs{{}^{(A)}\bfD_{\tau} (\tau \cosh y \phi)}^{2} + \abs{\tau \cosh y \phi}^{2} \Big).
\end{equation*} Integrating in $\tau$ from $\tau_{0}$ and using Cauchy-Schwarz, \eqref{eq:ODE} follows. \end{proof} \subsection{Gauge invariant interpolation inequalities with weights} In this subsection, we derive various interpolation inequalities involving $\bfZ$ and weights of the form $\cosh y$. \begin{lemma} \label{lem:noZ} Let $\phi$ be a smooth compactly supported $V$-valued function on $\mathcal H_{\tau}$. Then for $1 \leq r \leq p \leq q \leq \infty$ and $0 \leq \vartheta \leq 1$ defined by $\frac{1}{p} = \frac{1-\vartheta}{q} + \frac{\vartheta}{r}$, we have \begin{equation} \label{eq:noZ} \nrm{\cosh y \phi}_{L^{p}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \leq C \nrm{\cosh y \phi}^{1- \vartheta}_{L^{q}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \nrm{\cosh y \phi}^{\vartheta}_{L^{r}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})}. \end{equation} \end{lemma} By scaling, we may take $\tau = 1$. Then this lemma is an easy consequence of H\"older's inequality with respect to the measure $(\cosh y)^{-1} \mathrm{d} \sgm_{\mathcal H_{1}}$. \begin{lemma}[Covariant Gagliardo--Nirenberg inequality with weights] \label{lem:Z} Let $\phi$ be a smooth compactly supported $V$-valued function on $\mathcal H_{\tau}$.
Then for $2 \leq p, q, r \leq \infty$ and $\frac{2}{p} = \frac{1}{q} + \frac{1}{r}$, we have \begin{equation} \label{eq:Z} \begin{aligned} & \nrm{\cosh y \bfZ \phi}_{L^{p}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \\ & \quad \leq C \nrm{\cosh y \phi}_{L^{q}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})}^{\frac{1}{2}} \Big( \sum_{k: 0 \leq k \leq 2} \nrm{\cosh y \bfZ^{(k)} \phi}_{L^{r}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \Big)^{\frac{1}{2}} \end{aligned} \end{equation} \end{lemma} \begin{proof} To simplify the exposition, we will use the following notation: For $1 \leq r \leq \infty$ and $k \geq 0$ an integer, we will write \begin{equation*} \nrm{\cosh y \bfZ^{(\leq k)} \phi}_{L^{r}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} := \sum_{0 \leq k' \leq k} \nrm{\cosh y \bfZ^{(k')} \phi}_{L^{r}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})}. \end{equation*} Also, by scaling, it suffices to consider the case $\tau = 1$. Below, we will omit $\mathcal H_{1}$ and simply write $L^{p} = L^{p}(\mathcal H_{1}, \frac{\mathrm{d} \sgm}{\cosh y})$. Before we embark on the proof, we will introduce a few necessary ingredients. Our first ingredient is the following integration by parts formula on $\mathcal H_{1}$: For $Z = Z_{\mu \nu}$ ($\mu, \nu = 0, 1, 2$) and $f, g$ smooth compactly supported real-valued functions on $\mathcal H_{1}$, it holds that \begin{equation} \label{eq:intbyparts} \int_{\mathcal H_{1}} (Z f) g \, \mathrm{d} \sigma_{\mathcal H_{1}} = - \int_{\mathcal H_{1}} f Z g \, \mathrm{d} \sigma_{\mathcal H_{1}}. \end{equation} This identity, which is equivalent to saying that the orthogonal projection of $Z_{\mu \nu}$ to $\mathcal H_{1}$ is divergence-free, is an immediate consequence of the fact that $Z_{\mu \nu}$ is a Killing vector field in the ambient Minkowski space $\mathbb R^{1+2}$ that is tangent to $\mathcal H_{1}$. Observe also that \begin{equation*} \abs{Z (\cosh y)} \leq \cosh y. 
\end{equation*} A quick way to verify this inequality is to note that $\cosh y = \frac{t}{\tau}$ and \begin{equation*} \abs{Z_{\mu \nu} (\frac{t}{\tau})} = \frac{1}{\tau}\abs{x_{\mu} \delta_{\nu}^{0} - x_{\nu} \delta_{\mu}^{0} } \leq \frac{t}{\tau} \hbox{ on } \mathcal H_{1}. \end{equation*} With these preparations, we are now ready to prove \eqref{eq:Z}. Using \eqref{eq:intbyparts} and the Leibniz rule \eqref{eq:leibniz-V}, this inequality can be shown as follows: Writing $p = 2+ 2b$, where $b \geq 0$ since $p \geq 2$, we have \begin{align*} \wnrm{\cosh y \bfZ \phi}_{L^{p}}^{p} =& \int (\cosh y)^{p-1} \brk{\bfZ \phi, \bfZ\phi} \brk{\bfZ \phi, \bfZ \phi}^{b} \, \mathrm{d} \sgm \\ = & - \int (\cosh y)^{p-1} \brk{\phi, \bfZ^{2} \phi} \brk{\bfZ \phi, \bfZ \phi}^{b} \, \mathrm{d} \sgm \\ & + \int (\cosh y)^{p-1} Z\brk{\phi, \bfZ \phi} \brk{\bfZ \phi, \bfZ \phi}^{b} \, \mathrm{d} \sgm \\ = & - \int (\cosh y)^{p} \brk{\phi, \bfZ^{2} \phi} \brk{\bfZ \phi, \bfZ \phi}^{b} \, \frac{\mathrm{d} \sgm}{\cosh y} \\ & - 2 b \int (\cosh y)^{p} \brk{\phi, \bfZ \phi} \brk{\bfZ \phi, \bfZ \phi}^{b-1} \brk{\bfZ \phi, \bfZ^{2} \phi} \, \frac{\mathrm{d} \sgm}{\cosh y} \\ & - (p-1) \int (\cosh y)^{p-1} Z (\cosh y) \brk{\phi, \bfZ \phi} \brk{\bfZ \phi, \bfZ \phi}^{b} \, \frac{\mathrm{d} \sgm}{\cosh y}. \end{align*} Then, by H\"{o}lder's inequality and the bound $\abs{Z(\cosh y)} \leq \cosh y$, the absolute value of the last expression is bounded from above by \begin{equation*} C \wnrm{\cosh y \phi}_{L^{q}} \wnrm{\cosh y\bfZ^{(\leq 2)} \phi}_{L^{r}} \wnrm{\cosh y \bfZ \phi}_{L^{p}}^{p-2}, \end{equation*} from which \eqref{eq:Z} follows. \qedhere \end{proof} \begin{lemma} \label{lem:GN} Let $\phi$ be a smooth compactly supported $V$-valued function on $\mathcal H_{\tau}$.
For $0 \leq k \leq m-1$, we have \begin{equation} \label{eq:GN} \begin{aligned} & \sum_{0 \leq \ell \leq k} \| \cosh y \, \bfZ^{(\ell)} \phi \|_{L^{p}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \\ & \qquad \leq C \| \cosh y \phi \|^{1- \frac{k}{m}}_{L^{\infty}(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \Big( \sum_{0 \leq \ell \leq m} \| \cosh y \, \bfZ^{(\ell)} \phi\|_{L^2(\mathcal H_{\tau}, \frac{\mathrm{d} \sgm}{\cosh y})} \Big)^{\frac{k}{m}} \end{aligned} \end{equation} where $\frac{1}{p} = \frac{k}{2m}$. \end{lemma} \begin{proof} We will use the same notation and conventions as in the previous proof. Fix $m \geq 1$. We claim that the following holds: For $1 \leq k \leq m-1$ and $r_{k} := \frac{2m}{k}$, \begin{equation} \label{eq:Zk} \wnrm{\cosh y \bfZ^{(\leq k)} \phi}_{L^{r_{k}}} \leq C \wnrm{\cosh y \phi}_{L^{\infty}}^{\frac{1}{k+1}} \wnrm{\cosh y \bfZ^{(\leq k+1)} \phi}_{L^{r_{k+1}}}^{\frac{k}{k+1}}\, . \end{equation} Indeed, \eqref{eq:GN} would follow from \eqref{eq:Zk} by induction on $k$. To prove the inequality \eqref{eq:Zk}, we use a separate induction argument on $k$. The $k=1$ case follows from \eqref{eq:noZ} and \eqref{eq:Z} by taking $p = 2m$, $q = \infty$ and $r =m$. Next, assume that \eqref{eq:Zk} holds for some integer $k-1$ such that $1\leq k-1 \leq m-2$. Then for every integer $\ell$ satisfying $1 \leq \ell \leq k$, by \eqref{eq:Z} and the induction hypothesis, we have \begin{align} \label{eq:GN:pf:1} &\wnrm{\cosh y \bfZ^{(\ell)} \phi}_{L^{r_{k}}} \notag \\ & \quad \leq C \wnrm{\cosh y \bfZ^{(\ell+1)} \phi}_{L^{r_{k+1}}}^{\frac{1}{2}} \wnrm{\cosh y \bfZ^{(\ell-1)} \phi}_{L^{r_{k-1}}}^{\frac{1}{2}} \\ & \quad \leq C \wnrm{\cosh y \bfZ^{(\leq k+1)} \phi}_{L^{r_{k+1}}}^{\frac{1}{2}} \Big( \wnrm{\cosh y \bfZ^{(\leq k)} \phi}_{L^{r_{k}}}^{\frac{k-1}{k}} \wnrm{\cosh y \phi}_{L^{\infty}}^{\frac{1}{k}} \Big)^{\frac{1}{2}}.
\notag \end{align} For $\ell = 0$, we have \begin{equation} \label{eq:GN:pf:2} \wnrm{\cosh y \phi}_{L^{r_{k}}} \leq C \wnrm{\cosh y \phi}_{L^{\infty}}^{\frac{1}{k+1}} \wnrm{\cosh y \phi}_{L^{r_{k+1}}}^{\frac{k}{k+1}} \end{equation} by \eqref{eq:noZ}. Summing up \eqref{eq:GN:pf:1} for $1 \leq \ell \leq k$ and \eqref{eq:GN:pf:2}, we arrive at \begin{align*} & \wnrm{\cosh y \bfZ^{(\leq k)} \phi}_{L^{r_{k}}} \\ & \quad \leq C \wnrm{\cosh y \bfZ^{(\leq k+1)} \phi}_{L^{r_{k+1}}}^{\frac{1}{2}} \Big( \wnrm{\cosh y \bfZ^{(\leq k)} \phi}_{L^{r_{k}}}^{\frac{k-1}{k}} \wnrm{\cosh y \phi}_{L^{\infty}}^{\frac{1}{k}} \Big)^{\frac{1}{2}}, \end{align*} which then implies the desired estimate \eqref{eq:Zk} for $k$, after dividing both sides by $\wnrm{\cosh y \bfZ^{(\leq k)} \phi}_{L^{r_{k}}}^{\frac{k-1}{2k}}$ and raising the resulting inequality to the power $\frac{2k}{k+1}$. \end{proof} \section{Commutation relations and structure of the equations} \label{sec:comm} The purpose of this section is to compute the equation satisfied by $\bfZ^{(k)} \phi$, using the commutation properties of \eqref{eq:CSH} and \eqref{eq:CSD} with respect to $\bfZ_{\mu \nu}$. This is the main algebraic ingredient of our proof of Theorems~\ref{thm:CSH} and \ref{thm:CSD}. The main tool for our computation is the formalism of exterior differential calculus for $V$- and $\mathfrak{g}$-valued forms introduced in Section~\ref{subsec:extr-calc}, which is summarized and further developed in Section~\ref{subsec:extr-calc-2}. Then in Section~\ref{subsec:comm-covBox}, we compute the commutator between ${}^{(A)} \Box-1$ and $\bfZ_{\mu \nu}$, under the Chern--Simons equation $F = \star J$. The (general order) commutator can be expressed in terms of covariant Lie derivatives of the current $J$ (i.e., ${}^{(A)}\calL_{Z}^{(k)} J$); the latter is computed for \eqref{eq:CSH} and \eqref{eq:CSD} in Sections~\ref{subsec:comm-CSH} and \ref{subsec:comm-CSD}, respectively. In Section~\ref{subsec:comm-U}, we establish commutation properties of the $V$-valued potential $U(\phi)$.
Finally, in Section~\ref{subsec:ptwise}, we provide rudimentary pointwise bounds for various expressions introduced in this section, which will be basic to the analysis performed in Section~\ref{sec:BA}. \subsection{More on the exterior differential calculus} \label{subsec:extr-calc-2} This section is a continuation of Section~\ref{subsec:extr-calc}. We begin by summarizing the key formulae of the exterior differential calculus of real-valued differential forms. \begin{lemma}[Exterior differential calculus] \label{lem:extr-calc} Given a real-valued $k$-form $\omega$ and vector fields $X$, $Y$, the following identities hold. \begin{align} [\calL_{X}, \iota_{Y}] \omega = & \iota_{[X,Y]} \omega ,\\ [\calL_{X}, \calL_{Y}] \omega = & \calL_{[X,Y]} \omega , \\ [\calL_{X}, \mathrm{d}] \omega =& 0, \\ \mathrm{d}^{2} \omega =& 0. \end{align} The following identity, called \emph{Cartan's formula}, also holds. \begin{align} \iota_{X} \mathrm{d} \omega + \mathrm{d} \iota_{X} \omega =& \calL_{X} \omega. \label{eq:cartan-eq} \end{align} Moreover, given a real-valued $\ell$-form $\omega'$, the following Leibniz rules hold. \begin{align} \calL_{X} (\omega \wedge \omega') =& (\calL_{X} \omega) \wedge \omega' + \omega \wedge \calL_{X} \omega' \\ \iota_{X} (\omega \wedge \omega') = & (\iota_{X} \omega) \wedge \omega' + (-1)^{k} \omega \wedge \iota_{X} \omega' \\ \mathrm{d} (\omega \wedge \omega') = & (\mathrm{d} \omega) \wedge \omega' + (-1)^{k} \omega \wedge \mathrm{d} \omega'. \end{align} \end{lemma} Along with the facts that $\mathrm{d} f$ is the usual differential on functions and $\iota_{X} \mathrm{d} f = X f$, these identities completely characterize the operations $\calL_{X}$, $\iota_{X}$ and $\mathrm{d}$. Analogous calculus rules hold for $V$-valued differential forms. \begin{lemma} \label{lem:extr-calc-V} Given a $V$-valued $k$-form $v$ and a real-valued $\ell$-form $\omega$, we have \begin{equation} \omega \wedge v = (-1)^{k \ell} v \wedge \omega. 
\end{equation} Let $A$ be a connection 1-form and $F$ the associated curvature 2-form. For any vector fields $X, Y$, the following identities hold. \begin{align} [{}^{(A)}\calL_{X}, \iota_{Y}] v = & \iota_{[X,Y]} v, \label{eq:covLDiX}\\ [{}^{(A)}\calL_{X}, {}^{(A)}\calL_{Y}] v = & {}^{(A)}\calL_{[X,Y]} v + (\iota_{Y} \iota_{X} F) v, \\ [{}^{(A)}\calL_{X}, {}^{(A)}\ud] v =& (\iota_{X} F) \wedge v, \label{eq:covLDcovud}\\ {}^{(A)}\ud^{2} v =& F \wedge v. \label{eq:covud-covud} \end{align} The following version of Cartan's formula also holds. \begin{align} \iota_{X} {}^{(A)}\ud v + {}^{(A)}\ud \iota_{X} v =& {}^{(A)}\calL_{X} v. \label{eq:cartan-eq-V} \end{align} Finally, the following Leibniz rules hold. \begin{align} {}^{(A)}\calL_{X} (v \wedge \omega) =& ({}^{(A)}\calL_{X} v) \wedge \omega + v \wedge \calL_{X} \omega \label{eq:leibniz-LD} \\ \iota_{X} (v \wedge \omega) = & (\iota_{X} v) \wedge \omega + (-1)^{k} v \wedge \iota_{X} \omega \\ {}^{(A)}\ud (v \wedge \omega) = & ({}^{(A)}\ud v) \wedge \omega + (-1)^{k} v \wedge \mathrm{d} \omega. \label{eq:leibniz-covud} \end{align} \end{lemma} The key difference from the real-valued case is that ${}^{(A)}\ud^{2} \neq 0$; instead, \eqref{eq:covud-covud} holds. When $v$ is a $0$-form (i.e., a $V$-valued function), this is precisely the definition of the curvature 2-form $F$. The general case of a $k$-form then follows from \eqref{eq:leibniz-covud}, which is itself straightforward to verify. The proof of the rest of the lemma is routine, using the definitions \eqref{eq:covud} and \eqref{eq:covLD}, as well as Lemma~\ref{lem:extr-calc}; we omit the details. For a $\mathfrak{g}$-valued $k$-form $a$, the covariant differential ${}^{(A)}\ud a$ and the covariant Lie derivative ${}^{(A)}\calL_{X} a$ are defined using the adjoint action. The formulae in Lemma~\ref{lem:extr-calc-V} hold verbatim for $\mathfrak{g}$-valued differential forms.
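Concretely, on a $\mathfrak{g}$-valued function $a$, the adjoint action gives
\begin{equation*}
{}^{(A)}\bfD_{X} a = \nabla_{X} a + [A(X), a],
\end{equation*}
and the analogue of \eqref{eq:covud-covud} reads ${}^{(A)}\ud^{2} a = [F \wedge a]$.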
Moreover, it is clear that the following additional Leibniz rules hold. \begin{lemma} \label{lem:leibniz-gV} Let $A$ be a connection 1-form. Given a $\mathfrak{g}$-valued $k$-form $a$ and a $V$-valued $\ell$-form $v$, the following Leibniz rules hold. \begin{align} {}^{(A)}\calL_{X} (a \wedge v) =& ({}^{(A)}\calL_{X} a) \wedge v + a \wedge {}^{(A)}\calL_{X} v \\ \iota_{X} (a \wedge v) = & (\iota_{X} a) \wedge v + (-1)^{k} a \wedge (\iota_{X} v) \\ {}^{(A)}\ud (a \wedge v) = & ({}^{(A)}\ud a) \wedge v + (-1)^{k} a \wedge ({}^{(A)}\ud v). \end{align} In particular, these Leibniz rules hold in the case $V = \mathfrak{g}$, where $v = b$ is a $\mathfrak{g}$-valued $\ell$-form. In this case, we have \begin{equation} [a \wedge b] = (-1)^{k \ell+1} [b \wedge a]. \end{equation} \end{lemma} Next, we introduce some useful definitions for computations concerning the current $J$, performed in Sections~\ref{subsec:comm-CSH} and \ref{subsec:comm-CSD}. For a pair $\phi^{1}, \phi^{2} \in V$, let \begin{equation} \label{eq:bbrk-def} \bbrk{\phi^{1}, \phi^{2}} = \frac{1}{2} \Big( \brk{\mathcal T \phi^{1}, \phi^{2}} + \brk{\phi^{2}, \mathcal T \phi^{1}} \Big). \end{equation} Observe that $\bbrk{\phi^{1}, \phi^{2}}$ is a $\mathfrak{g}$-valued bilinear (over $\mathbb R$) form in $\phi^{1}, \phi^{2}$, which is anti-symmetric thanks to the anti-hermitian property of $\mathcal T$. It obeys the following important Leibniz rule. \begin{lemma} \label{lem:leibniz-bbrk} Let $A$ be a connection 1-form. Given $V$-valued functions $\phi^{1}, \phi^{2}$ and a vector field $X$, the following Leibniz rule holds. \begin{equation} \label{eq:leibniz-bbrk} {}^{(A)}\bfD_{X} \bbrk{\phi^{1}, \phi^{2}} = \bbrk{ {}^{(A)}\bfD_{X} \phi^{1}, \phi^{2}} + \bbrk{\phi^{1}, {}^{(A)}\bfD_{X} \phi^{2}}. 
\end{equation} \end{lemma} \begin{proof} By symmetry, it suffices to prove \begin{equation} \label{eq:leibniz-bbrk-key} {}^{(A)}\bfD_{X} \brk{\mathcal T \phi^{1}, \phi^{2}} = \brk{ \mathcal T {}^{(A)}\bfD_{X} \phi^{1}, \phi^{2}} + \brk{\mathcal T \phi^{1}, {}^{(A)}\bfD_{X} \phi^{2}}. \end{equation} Recall that ${}^{(A)}\bfD_{X} \phi = \nabla_{X} \phi + A(X) \cdot \phi$ on a $V$- (or $\mathfrak{g}$-)valued function $\phi$. We introduce the shorthand $a = A(X)$. Fix an orthonormal basis $\set{e_{A}}$ of $\mathfrak{g}$, so that $\mathcal T^{A'} \varphi = \delta^{A'A} e_{A} \cdot \varphi$ and $a = a^{A} e_{A}$. We denote the structure constants by $\set{c_{AB}^{C}} \subseteq \mathbb R$, where $\LieBr{e_{A}} {e_{B}} = c_{AB}^{C} e_{C}$. Using \eqref{eq:leibniz-V} and the above conventions, the left-hand side of \eqref{eq:leibniz-bbrk-key} equals \begin{align*} & \hskip-2em \delta^{A A'} \brk{e_{A} \cdot \phi^{1}, \phi^{2}} [a, e_{A'}] + \delta^{A A'} \Big( \brk{{}^{(A)}\bfD_{X} (e_{A} \cdot \phi^{1}), \phi^{2}} + \brk{e_{A} \cdot \phi^{1}, {}^{(A)}\bfD_{X} \phi^{2}} \Big) e_{A'} \\ = & \delta^{A A'} \brk{e_{A} \cdot \phi^{1}, \phi^{2}} [a, e_{A'}] + \delta^{A A'} \brk{[a, e_{A}] \cdot \phi^{1}, \phi^{2}} e_{A'} \\ & + \delta^{A A'} \Big( \brk{e_{A} \cdot {}^{(A)}\bfD_{X} \phi^{1}, \phi^{2}} + \brk{e_{A} \cdot \phi^{1}, {}^{(A)}\bfD_{X} \phi^{2}} \Big) e_{A'}. \end{align*} Note that the last line is exactly the right-hand side of \eqref{eq:leibniz-bbrk-key}. 
Hence the difference between the left- and the right-hand sides of \eqref{eq:leibniz-bbrk-key} is equal to \begin{align*} & \hskip-2em \delta^{A A'} \brk{e_{A} \cdot \phi^{1}, \phi^{2}} \LieBr{a}{e_{A'}} + \delta^{A A'} \brk{\LieBr{a}{e_{A}} \cdot \phi^{1}, \phi^{2}} e_{A'} \\ = & a^{C} c_{CA'}^{D} \delta^{A A'} \brk{e_{A} \cdot \phi^{1}, \phi^{2}} e_{D} + a^{C} c_{CA}^{D} \delta^{A A'} \brk{e_{D} \cdot \phi^{1}, \phi^{2}} e_{A'} \\ = & (c_{CD}^{A'} \delta^{D A} + c_{CD}^{A} \delta^{D A'}) a^{C} \brk{e_{A} \cdot \phi^{1}, \phi^{2}} e_{A'}. \end{align*} Therefore, to establish \eqref{eq:leibniz-bbrk-key}, it suffices to show \begin{equation} \label{eq:str-const} c_{CD}^{A'} \delta^{D A} + c_{CD}^{A} \delta^{D A'} = 0. \end{equation} The identity \eqref{eq:str-const} is a consequence of the bi-invariance of $\brk{\cdot, \cdot}_{\mathfrak{g}}$, i.e., \begin{equation*} \brk{[e_{C}, e_{A'}], e_{A}}_{\mathfrak{g}} +\brk{[e_{C}, e_{A}], e_{A'}}_{\mathfrak{g}} = 0. \end{equation*} This completes the proof. \qedhere \end{proof} \begin{remark} In the abelian Chern--Simons--Higgs case (Example \ref{ex:a-CSH}), Lemma \ref{lem:leibniz-bbrk} is verified by simply computing \[ {}^{(A)}\bfD_X ( i\phi^1 \overline{ \phi^2} - i\overline{ \phi^1} \phi^2) =i {}^{(A)}\bfD_X \phi^1 \overline{ \phi^2} - i\overline {{}^{(A)}\bfD_X \phi^1}\phi^2 + i \phi^1 \overline{ {}^{(A)}\bfD_X\phi^2 }- i\overline{ \phi^1} {{}^{(A)}\bfD_X \phi^2}.\] \end{remark} The anti-symmetric form $\bbrk{\cdot, \cdot}$ induces a $\mathfrak{g}$-valued wedge product $\bbrk{v^{1} \wedge v^{2}}$ of $V$-valued forms, characterized by the relation \begin{equation} \label{eq:bbrk-wedge} \bbrk{(\phi^{1} \otimes \omega^{1}) \wedge (\phi^{2} \otimes \omega^{2})} = \bbrk{\phi^{1}, \phi^{2}} \otimes (\omega^{1} \wedge \omega^{2}) \end{equation} for $V$-valued functions $\phi^{1}, \phi^{2}$ and real-valued differential forms $\omega^{1}, \omega^{2}$.
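For instance, writing $V$-valued 1-forms as $v^{1} = \phi^{1}_{\mu} \otimes \mathrm{d} x^{\mu}$ and $v^{2} = \phi^{2}_{\nu} \otimes \mathrm{d} x^{\nu}$, the relation \eqref{eq:bbrk-wedge} gives
\begin{equation*}
\bbrk{v^{1} \wedge v^{2}} = \bbrk{\phi^{1}_{\mu}, \phi^{2}_{\nu}} \otimes (\mathrm{d} x^{\mu} \wedge \mathrm{d} x^{\nu}),
\end{equation*}
where the repeated indices $\mu, \nu$ are summed over.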
\begin{lemma} \label{lem:extr-calc-bbrk} Let $v, w$ be $V$-valued $k$- and $\ell$-forms, respectively. Then we have \begin{equation} \label{eq:bbrk-comm} \bbrk{v \wedge w} = (-1)^{k \ell + 1} \bbrk{w \wedge v}. \end{equation} Moreover, let $A$ be a connection 1-form. For any vector field $X$, the following Leibniz rules hold. \begin{align} {}^{(A)}\calL_{X} \bbrk{v \wedge w} = & \bbrk{{}^{(A)}\calL_{X} v \wedge w} + \bbrk{v \wedge {}^{(A)}\calL_{X} w}, \label{eq:leibniz-bbrk-LD} \\ \iota_{X} \bbrk{v \wedge w} = & \bbrk{\iota_{X} v \wedge w} + (-1)^{k} \bbrk{v \wedge \iota_{X} w}, \label{eq:leibniz-bbrk-iota} \\ {}^{(A)}\ud \bbrk{v \wedge w} = & \bbrk{{}^{(A)}\ud v \wedge w} + (-1)^{k} \bbrk{v \wedge {}^{(A)}\ud w}. \label{eq:leibniz-bbrk-d} \end{align} \end{lemma} \begin{proof} Identities \eqref{eq:bbrk-comm} and \eqref{eq:leibniz-bbrk-iota} are immediate from the defining relation \eqref{eq:bbrk-wedge}, whereas \eqref{eq:leibniz-bbrk-LD} and \eqref{eq:leibniz-bbrk-d} follow from Lemmas~\ref{lem:extr-calc} and~\ref{lem:leibniz-bbrk}. We omit the routine details. \qedhere \end{proof} To summarize, we have defined $V$-valued wedge products $v \wedge w$, $a \wedge v$ and $\mathfrak{g}$-valued wedge products $[a \wedge a']$, $\bbrk{v\wedge w}$ for $V$-valued forms $v, w$ and $\mathfrak{g}$-valued forms $a, a'$, which obey appropriate Leibniz rules with respect to ${}^{(A)}\calL _X$, $\iota_X$ and ${}^{(A)}\ud$. Note that in Lemmas \ref{lem:leibniz-bbrk} and \ref{lem:extr-calc-bbrk}, such rules hold with covariant derivatives on both sides (in contrast to, say, \eqref{eq:leibniz-V} and \eqref{eq:leibniz-g}). We now turn to properties of the Hodge star operator $\star$. In order to state them, we need a few definitions.
First, we define the \emph{(exterior) codifferential} operator $\delta$ for real-valued differential forms by the formula \begin{align} \int \eta^{-1}(\mathrm{d} \omega^{1}, \omega^{2}) \, \mathrm{d} \sgm_{\mathbb R^{1+2}} =& \int \eta^{-1}(\omega^{1}, \delta \omega^{2}) \, \mathrm{d} \sgm_{\mathbb R^{1+2}}, \label{eq:dlt} \end{align} where $\omega^{1}$ and $\omega^{2}$ are compactly supported real-valued $k$- and $(k+1)$-forms, respectively. The \emph{covariant codifferential} ${}^{(A)}\dlt$ of a $V$-valued differential form $v$ is defined similarly using ${}^{(A)}\ud$ and the Minkowski metric for $V$-valued differential forms, which is naturally defined using $\brk{\cdot, \cdot}_{V}$ (for explicit alternative formulae for $\delta$ and ${}^{(A)}\dlt$, see \eqref{eq:star-dlt} below). Given a vector field $X$, we will denote its metric dual 1-form by $X^{\flat}$, i.e., $X^{\flat}_{b} := X^{a} \eta_{ab}$. In the following two lemmas, we record some useful properties of $\star$. \begin{lemma} \label{lem:star} Let the base manifold be $\mathbb R^{1+2}$ with the Minkowski metric of signature $(-1, +1, +1)$. Given a real-valued $k$-form $\omega$ and a vector field $X$, we have \begin{align} \star \star \omega =& - \omega, \label{eq:star-star} \\ \iota_{X} \star \omega =& \star (\omega \wedge X^{\flat}), \label{eq:star-iX}\\ \delta \omega =& (-1)^{k+1} \star \mathrm{d} \star \omega. \label{eq:star-dlt} \end{align} Moreover, if $Z$ is a Killing vector field, then $\calL_{Z}$ commutes with $\star$ and $\flat$, i.e., \begin{align} \calL_{Z} \star \omega =& \star \calL_{Z} \omega, \label{eq:star-LDZ}\\ \calL_{Z} X^{\flat} =& (\calL_{Z} X)^{\flat} = [Z, X]^{\flat}. \label{eq:star-flat} \end{align} Finally, the formulae \eqref{eq:star-star}--\eqref{eq:star-LDZ} hold for any $V$- or $\mathfrak{g}$-valued $k$-form, where $\delta$, $\mathrm{d}$ and $\calL_{Z}$ are replaced by the covariant counterparts ${}^{(A)}\dlt$, ${}^{(A)}\ud$ and ${}^{(A)}\calL_{Z}$.
\end{lemma} \begin{proof} The identity \eqref{eq:star-star} is a quick consequence of the definition \eqref{eq:star-def}, whereas \eqref{eq:star-iX} follows from the fact that $\iota_{X}$ is dual to $X^{\flat} \wedge$, i.e., \begin{equation*} \eta^{-1}(\iota_{X} \omega^{1}, \omega^{2}) = \eta^{-1}(\omega^{1}, X^{\flat} \wedge \omega^{2}). \end{equation*} The identity \eqref{eq:star-dlt} follows from \eqref{eq:dlt}. For \eqref{eq:star-LDZ} and \eqref{eq:star-flat}, note that if $Z$ is Killing then \begin{equation*} \calL_{Z} \eta = 0, \quad \calL_{Z} \eta^{-1} = 0, \quad \calL_{Z} \epsilon = 0. \end{equation*} From these facts, \eqref{eq:star-flat} follows immediately. To prove \eqref{eq:star-LDZ}, we compute \begin{align*} \calL_{Z} \big( \eta^{-1} (\omega^{1} , \omega^{2}) \epsilon \big) = & \eta^{-1}(\calL_{Z} \omega^{1}, \omega^{2}) \epsilon + \eta^{-1} (\omega^{1}, \calL_{Z} \omega^{2}) \epsilon \\ = & \calL_{Z} \omega^{1} \wedge \star \omega^{2} + \omega^{1} \wedge \star \calL_{Z} \omega^{2}, \\ \calL_{Z} \big( \omega^{1} \wedge \star \omega^{2} \big) = & \calL_{Z} \omega^{1} \wedge \star \omega^{2} + \omega^{1} \wedge \calL_{Z} \star \omega^{2}, \end{align*} and observe that the two left-hand sides are equal by \eqref{eq:star-def}. Finally, note that the same proof goes through for $V$- or $\mathfrak{g}$-valued differential forms. \end{proof} \begin{lemma} \label{lem:star-aux} Let the base manifold be $\mathbb R^{1+2}$ with the Minkowski metric with signature $(-1, +1, +1)$. Given a $\mathfrak{g}$-valued $k$-form $a$ and a $V$-valued $k$-form $v$ $(0 \leq k \leq 3)$, we have \begin{equation} \label{eq:star-aux:k-form} a \wedge \star v = \star a \wedge v. \end{equation} Moreover, if $\phi$ is a $V$-valued 0-form (i.e., a $V$-valued function), then \begin{equation} \label{eq:star-aux:0-form} (\star a) \wedge \phi = \star (a \wedge \phi). 
\end{equation} \end{lemma} \begin{proof} This lemma follows from the real-valued counterparts \begin{equation*} \omega_{1} \wedge \star \omega_{2} = \star \omega_{1} \wedge \omega_{2}, \quad (\star \omega_{1}) \wedge f = \star (\omega_{1} \wedge f), \end{equation*} which are easily seen to hold for real-valued $k$-forms $\omega_{1}, \omega_{2}$ and a $0$-form (i.e., a real-valued function) $f$. \qedhere \end{proof} In view of the characterizing relation \eqref{eq:star-def} of $\star$, we \emph{define} the real-valued bilinear form $\eta^{-1}(a \cdot v)$ for a $\mathfrak{g}$-valued $k$-form $a$ and a $V$-valued $k$-form $v$ $(0 \leq k \leq 2)$ so that \begin{align}\label{eq:star-gV} a \wedge \star v = \star a \wedge v = \eta^{-1}(a\cdot v) \epsilon. \end{align} The d'Alembertian operator $\Box$ can be expressed in terms of $\mathrm{d}$ and $\star$ as follows. Recalling the definition of the divergence, \eqref{eq:dlt} and \eqref{eq:star-dlt}, for any real-valued 1-form $\omega$, we have \begin{equation*} \mathrm{div} \, \omega = - \delta \omega = - \star \mathrm{d} \star \omega. \end{equation*} Hence, it follows that $\Box f = \mathrm{div} (\mathrm{d} f) = - \star \mathrm{d} \star \mathrm{d} f$. By an entirely analogous computation, the covariant d'Alembertian operator can be expressed in terms of ${}^{(A)}\ud$ and $\star$ as well. We record this result as a lemma. \begin{lemma} \label{lem:covBox} Given a $V$-valued function $\phi$, we have \begin{equation} \label{eq:covBox} {}^{(A)} \Box \phi = - \star {}^{(A)}\ud \star {}^{(A)}\ud \phi. \end{equation} \end{lemma} Because of the Chern--Simons equation $F = \star J$, expressions of the form $\iota_{Z} \star$ often need to be considered. Our final lemma in this subsection is a technical result, which can be used to compute the commutator between ${}^{(A)}\ud$ (or ${}^{(A)}\dlt$) and $\iota_{Z} \star$. \begin{lemma} \label{lem:d-i-star} Let $Z$ be a Killing vector field and $v$ be a $V$-valued $k$-form. Then
\begin{align} {}^{(A)}\ud \, \iota_{Z} \star v =& (-1)^{k+1} \iota_{Z} \star {}^{(A)}\dlt v + \star {}^{(A)}\calL_{Z} v, \label{eq:d-i-star}\\ {}^{(A)}\dlt \, \iota_{Z} \star v =& (-1)^{k} \iota_{Z} \star {}^{(A)}\ud v + \star (v \wedge \mathrm{d} Z^{\flat}). \label{eq:dlt-i-star} \end{align} \end{lemma} \begin{proof} For \eqref{eq:d-i-star}, we compute using Lemmas~\ref{lem:extr-calc-V} and \ref{lem:star} as follows: \begin{align*} {}^{(A)}\ud \, \iota_{Z} \star v =& - \iota_{Z} {}^{(A)}\ud \star v + {}^{(A)}\calL_{Z} \star v \\ =& (-1)^{k+1} \iota_{Z} \star {}^{(A)}\dlt v + \star {}^{(A)}\calL_{Z} v. \end{align*} Similarly, for \eqref{eq:dlt-i-star}, we again use Lemmas~\ref{lem:extr-calc-V} and \ref{lem:star} to compute \begin{align*} {}^{(A)}\dlt \, \iota_{Z} \star v =& (-1)^{3 - k} \star {}^{(A)}\ud \star \iota_{Z} \star v \\ =& (-1)^{k} \star {}^{(A)}\ud (v \wedge Z^{\flat}) \\ =& (-1)^{k} \iota_{Z} \star {}^{(A)}\ud v + \star (v \wedge \mathrm{d} Z^{\flat}). \qedhere \end{align*} \end{proof} \subsection{Commutation relation for the covariant Klein--Gordon operator} \label{subsec:comm-covBox} Our goal here is to compute the commutator between the covariant Klein--Gordon operator ${}^{(A)} \Box - 1$ and $\bfZ^{(k)}$. The basic computation is contained in the following lemma. \begin{lemma} \label{lem:comm-covBox} Let $J$ be a $\mathfrak{g}$-valued 1-form, and $A$ be a connection 1-form satisfying the Chern--Simons equation $F = \star J$. Then given any $V$-valued function (viewed as a 0-form) $\phi$ and a Killing vector field $Z$, we have \begin{equation} \label{eq:comm-covBox} [\bfZ, {}^{(A)} \Box] \phi = \iota_{Z} \star {}^{(A)}\ud (J \wedge \phi) - \iota_{Z} \star (J \wedge {}^{(A)}\ud \phi) - \star (J \wedge \phi \wedge \mathrm{d} Z^{\flat}). 
\end{equation} Moreover, given in addition a $\mathfrak{g}$-valued 1-form $\Gamma$ and Killing vector fields $Z_{1}$ and $Z_{2}$, we have \begin{align} \bfZ_{2} (\iota_{Z_{1}} \star {}^{(A)}\ud (\Gamma \wedge \phi)) = & \iota_{Z_{1}} \star {}^{(A)}\ud({}^{(A)}\calL_{Z_{2}} \Gamma \wedge \phi) + \iota_{Z_{1}} \star {}^{(A)}\ud(\Gamma \wedge \bfZ_{2} \phi) \label{eq:comm-covBox-1} \\ & + \iota_{[Z_{2}, Z_{1}]} \star {}^{(A)}\ud (\Gamma \wedge \phi) + \eta^{-1}(J \wedge Z_{2}^{\flat} \cdot (\Gamma \wedge \phi \wedge Z_{1}^{\flat})), \notag \\ \bfZ_{2} (- \iota_{Z_{1}} \star (\Gamma \wedge {}^{(A)}\ud \phi)) = & - \iota_{Z_{1}} \star ({}^{(A)}\calL_{Z_{2}} \Gamma \wedge {}^{(A)}\ud \phi) - \iota_{Z_{1}} \star (\Gamma \wedge {}^{(A)}\ud (\bfZ_{2} \phi)) \label{eq:comm-covBox-2}\\ & - \iota_{[Z_{2}, Z_{1}]} \star(\Gamma \wedge {}^{(A)}\ud \phi) + \eta^{-1}(\Gamma \wedge Z_{1}^{\flat} \cdot (J \wedge Z_{2}^{\flat} \wedge \phi)), \notag \\ \bfZ_{2} \star (\Gamma \wedge \phi \wedge \mathrm{d} Z_{1}^{\flat}) = & \star ({}^{(A)}\calL_{Z_{2}} \Gamma \wedge \phi \wedge \mathrm{d} Z_{1}^{\flat}) \label{eq:comm-covBox-3} + \star (\Gamma \wedge \bfZ_{2} \phi \wedge \mathrm{d} Z_{1}^{\flat})\\ & + \star (\Gamma \wedge \phi \wedge \mathrm{d} [Z_{2}, Z_{1}]^{\flat}). \notag \end{align} \end{lemma} We postpone the proof until the end of this section, and proceed to the computation of $[\bfZ^{(k)}, {}^{(A)} \Box - 1]$. Given a $\mathfrak{g}$-valued 1-form $\Gamma$, a $V$-valued function $\phi$ and a vector field $Z$, define \begin{align} \mathfrak{N}_{1}[\Gamma, \phi; Z] = & \iota_{Z} \star ( ({}^{(A)}\ud \Gamma) \wedge \phi ), \label{eq:frkN-1} \\ \mathfrak{N}_{2}[\Gamma, \phi; Z] = & - 2 \iota_{Z} \star (\Gamma \wedge {}^{(A)}\ud \phi) , \label{eq:frkN-2}\\ \mathfrak{N}_{3}[\Gamma, \phi; Z] = & - \star (\Gamma \wedge \phi \wedge \mathrm{d} Z^{\flat}) . 
\label{eq:frkN-3} \end{align} Note that, by the Leibniz rule (Lemma~\ref{lem:leibniz-gV}), the sum $\mathfrak{N}_{1} + \mathfrak{N}_{2}$ satisfies \begin{equation} \label{eq:N12-leibniz} \iota_{Z} \star {}^{(A)}\ud (\Gamma \wedge \phi) - \iota_{Z} \star (\Gamma \wedge {}^{(A)}\ud \phi) = \mathfrak{N}_{1}[\Gamma, \phi; Z] + \mathfrak{N}_{2}[\Gamma, \phi; Z]. \end{equation} The reason we split \eqref{eq:N12-leibniz} into $\mathfrak{N}_{1}$ and $\mathfrak{N}_{2}$, rather than $\iota_{Z} \star {}^{(A)}\ud (\Gamma \wedge \phi)$ and $- \iota_{Z} \star (\Gamma \wedge {}^{(A)}\ud \phi)$, is simply that the former pair turns out to be more convenient to estimate. For $\mathfrak{g}$-valued 1-forms $\Gamma^{1}, \Gamma^{2}$, a $V$-valued function $\phi$ and vector fields $Z_{1}, Z_{2}$, we also define \begin{equation} \label{eq:frkN-4} \begin{aligned} \mathfrak{N}_{4}[\Gamma^{1}, \Gamma^{2}, \phi; Z_{1}, Z_{2}] =& \eta^{-1}(\Gamma^{2} \wedge Z_{2}^{\flat} \cdot (\Gamma^{1} \wedge \phi \wedge Z_{1}^{\flat})) \\ & + \eta^{-1}(\Gamma^{1} \wedge Z_{1}^{\flat} \cdot (\Gamma^{2} \wedge \phi \wedge Z_{2}^{\flat})). \end{aligned} \end{equation} In applications, keeping track of the exact Killing vector field $Z$ (or $Z_{1}, Z_{2}$) involved in these formulae is not important. Accordingly, in what follows we often use the simple schematic notation \begin{equation*} \mathfrak{N}_{j}[\Gamma, \phi] = \mathfrak{N}_{j}[\Gamma, \phi; Z] \quad (j=1,2,3), \quad \mathfrak{N}_{4}[\Gamma^{1}, \Gamma^{2}, \phi] = \mathfrak{N}_{4}[\Gamma^{1}, \Gamma^{2}, \phi; Z_{1}, Z_{2}] \end{equation*} where $Z$, $Z_{1}$ and $Z_{2}$ are understood to be one of the vector fields $Z_{\mu \nu}$. With this convention in mind, we are finally able to state the main result of this section in a fairly compact form. \begin{proposition} \label{prop:comm-covKG} Let $J$ be a $\mathfrak{g}$-valued 1-form, and $A$ a connection 1-form satisfying the Chern--Simons equation $F = \star J$.
Let $\phi$ be a $V$-valued function. Then for $m \geq 1$, the following schematic commutation formula holds: \begin{equation} \label{eq:comm-covKG} \begin{aligned} {[\bfZ^{(m)}, {}^{(A)} \Box - 1]} \phi =& \sum_{k_{1}+k_{2} \leq m-1} \mathfrak{N}_{1}[{}^{(A)}\calL_{Z}^{(k_{1})} J, \bfZ^{(k_{2})} \phi] + \sum_{k_{1}+k_{2} \leq m-1} \mathfrak{N}_{2}[{}^{(A)}\calL_{Z}^{(k_{1})} J, \bfZ^{(k_{2})} \phi] \\ & + \sum_{k_{1}+k_{2} \leq m-1} \mathfrak{N}_{3}[{}^{(A)}\calL_{Z}^{(k_{1})} J, \bfZ^{(k_{2})} \phi] + \sum_{k_{1}+k_{2}+k_{3} \leq m-2} \mathfrak{N}_{4}[{}^{(A)}\calL_{Z}^{(k_{1})} J, {}^{(A)}\calL_{Z}^{(k_{2})} J, \bfZ^{(k_{3})} \phi] \end{aligned} \end{equation} where the last sum should be omitted in the case $m = 1$. \end{proposition} We remind the reader that by a schematic formula, we mean that the left-hand side equals a linear combination of terms on the right-hand side. This proposition is an immediate consequence of Lemma~\ref{lem:comm-covBox}, the definitions \eqref{eq:frkN-1}--\eqref{eq:frkN-4}, and the Lie algebra relation among $\set{Z_{\mu \nu}}$. Hence it is only left to establish Lemma~\ref{lem:comm-covBox}; this is done using the tools developed in Section~\ref{subsec:extr-calc-2}. \begin{proof} [Proof of Lemma~\ref{lem:comm-covBox}] We begin by establishing \eqref{eq:comm-covBox}. On $V$-valued functions, the differential operator $\bfZ$ is identical to ${}^{(A)}\calL_{Z}$. Using \eqref{eq:covBox} and the fact that $Z$ is a Killing vector field (hence ${}^{(A)}\calL_{Z}$ commutes with $\star$), the left-hand side of \eqref{eq:comm-covBox} is equal to \begin{equation*} - \star \Big( [{}^{(A)}\calL_Z, {}^{(A)}\ud] \star {}^{(A)}\ud \phi \Big) - \star {}^{(A)}\ud \star [ {}^{(A)}\calL_Z, {}^{(A)}\ud] \phi. \end{equation*} Using \eqref{eq:covLDcovud} and the Chern--Simons equation $F = \star J$, we may replace the commutator by $(\iota _Z \star J) \wedge$. 
Applying \eqref{eq:star-iX}, the preceding expression is then equal to \begin{align*} & \hskip-2em - \star \Big( \star (J \wedge Z^{\flat}) \wedge \star {}^{(A)}\ud \phi \Big) - \star {}^{(A)}\ud \star \Big( \star (J \wedge Z^{\flat}) \wedge \phi \Big) \\ & = - \star \Big( \star \star (J \wedge Z^{\flat}) \wedge {}^{(A)}\ud \phi \Big) - \star {}^{(A)}\ud \star \star (J \wedge Z^{\flat} \wedge \phi) \\ & = \star (J \wedge Z^{\flat} \wedge {}^{(A)}\ud \phi ) + \star {}^{(A)}\ud (J \wedge Z^{\flat} \wedge \phi), \end{align*} where we have used Lemma~\ref{lem:star-aux} and \eqref{eq:star-star}. The desired identity \eqref{eq:comm-covBox} now follows from Lemma~\ref{lem:leibniz-gV} and \eqref{eq:star-iX}. The identities \eqref{eq:comm-covBox-1}--\eqref{eq:comm-covBox-3} follow from routine computation, as in the preceding proof of \eqref{eq:comm-covBox}; hence we only sketch the proofs and leave the details to the reader. All these identities are proved by first replacing $\bfZ_{2}$ by ${}^{(A)}\calL_{Z_{2}}$, applying \eqref{eq:star-iX} and then using the Leibniz rule \eqref{eq:leibniz-LD}. Since $Z_{2}$ is Killing, note that ${}^{(A)}\calL_{Z_{2}}$ commutes with $\star$. We remark that the last terms in \eqref{eq:comm-covBox-1} and \eqref{eq:comm-covBox-2} arise from the commutator $[{}^{(A)}\calL_{Z_{2}}, {}^{(A)}\ud]$, \eqref{eq:covLDcovud}, the Chern--Simons equation $F = \star J$, and the definition \eqref{eq:star-gV}. \qedhere \end{proof} \subsection{Covariant Lie derivatives of $J_{\mathrm{CSH}}$} \label{subsec:comm-CSH} In Proposition~\ref{prop:comm-covKG}, the commutator between $\bfZ^{(m)}$ and ${}^{(A)} \Box - 1$ was computed in terms of $\bfZ^{(k)} \phi$ and the covariant Lie derivatives of $J$. In this subsection, we compute the latter in terms of $\varphi$ in the case of \eqref{eq:CSH}. 
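For orientation, before specializing to \eqref{eq:CSH}, it may help to record the simplest instance $m = 1$ of Proposition~\ref{prop:comm-covKG}. The following display is obtained by combining \eqref{eq:comm-covBox} with the splitting \eqref{eq:N12-leibniz} (applied with $\Gamma = J$), using that the constant term $-1$ commutes with $\bfZ$; for a fixed Killing vector field $Z$ it is in fact an exact, not merely schematic, identity:

```latex
% The case m = 1 of \eqref{eq:comm-covKG}: by \eqref{eq:comm-covBox} and
% \eqref{eq:N12-leibniz} with \Gamma = J, no \mathfrak{N}_{4}-term arises.
\begin{equation*}
  [\bfZ, {}^{(A)} \Box - 1] \phi
  = \mathfrak{N}_{1}[J, \phi; Z] + \mathfrak{N}_{2}[J, \phi; Z]
    + \mathfrak{N}_{3}[J, \phi; Z].
\end{equation*}
```

In particular, the $\mathfrak{N}_{4}$-terms in \eqref{eq:comm-covKG} first appear at $m = 2$, arising from the last terms in \eqref{eq:comm-covBox-1} and \eqref{eq:comm-covBox-2}.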
We begin by defining the following $\mathfrak{g}$-valued differential form, which is bilinear (over $\mathbb R$) in the $V$-valued functions $\varphi^{1}, \varphi^{2}$: \begin{equation} \label{eq:Gmm-CSH-0} \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{1}, \varphi^{2}] = 2 \bbrk{\varphi^{1} \wedge {}^{(A)}\ud \varphi^{2}}, \end{equation} where $\bbrk{\cdot \wedge \cdot}$ has been defined in \eqref{eq:bbrk-def} and \eqref{eq:bbrk-wedge}. Recall that \[ J_{\mathrm{CSH}}(\varphi) = \brk{ \mathcal T \varphi, {}^{(A)}\ud \varphi} + \brk{ {}^{(A)}\ud \varphi, \mathcal T \varphi}.\] Hence, we may write \begin{equation*} J_{\mathrm{CSH}}(\varphi) = \Gamma_{\mathrm{CSH}}^{(0)}[\varphi, \varphi] = 2 \bbrk{\varphi \wedge {}^{(A)}\ud \varphi}. \end{equation*} We also define \begin{equation} \label{eq:Gmm-CSH-1} \Gamma_{\mathrm{CSH}}^{(1)}[\varphi^{1}, \varphi^{2}, \varphi^{3}, \varphi^{4}; Z] = 2 \bbrk{\varphi^{1} \wedge (\iota_{Z} \star \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{3}, \varphi^{4}] \wedge \varphi^{2})}, \end{equation} where each $\varphi^{j}$ is a $V$-valued function and $Z$ is a vector field. The relevance of $\Gamma_{\mathrm{CSH}}^{(1)}$ is made clear by the following lemma, in which the commutation relation of $\Gamma_{\mathrm{CSH}}^{(0)}$ is computed. \begin{lemma} \label{lem:comm-J-CSH-1} Let $A$ be a connection 1-form and $\varphi$ a $V$-valued function, which satisfy the Chern--Simons equation $F = \star J_{\mathrm{CSH}}(\varphi)$. Then for any $V$-valued functions $\varphi^{1}, \varphi^{2}$ and a Killing vector field $Z$, we have \begin{equation*} {}^{(A)}\calL_{Z} \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{1}, \varphi^{2}] = \Gamma_{\mathrm{CSH}}^{(0)}[\bfZ \varphi^{1}, \varphi^{2}] + \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{1}, \bfZ \varphi^{2}] + \Gamma_{\mathrm{CSH}}^{(1)}[\varphi^{1}, \varphi^{2}, \varphi, \varphi; Z] \end{equation*} \end{lemma} \begin{proof} This identity follows from the Leibniz rule \eqref{eq:leibniz-bbrk-LD} and \eqref{eq:covLDcovud}. 
\end{proof} Based on the observation that $\Gamma_{\mathrm{CSH}}^{(0)}$ occurs in $\Gamma_{\mathrm{CSH}}^{(1)}$, the higher order covariant Lie derivatives ${}^{(A)}\calL_{Z}^{(m)} J_{\mathrm{CSH}}(\varphi)$ can also be computed using Lemma~\ref{lem:comm-J-CSH-1} and the Leibniz rule \eqref{eq:leibniz-bbrk-LD}. We begin by making the following recursive definition of $\Gamma_{\mathrm{CSH}}^{(m)}$ for all integers $m \geq 1$: \begin{equation*} \begin{aligned} & \hskip-2em \Gamma_{\mathrm{CSH}}^{(m)}[\varphi^{1}, \varphi^{2}, \ldots, \varphi^{2m+2}; Z_{1}, Z_{2}, \ldots, Z_{m}] \\ = & 2 \bbrk{\varphi^{1} \wedge (\iota_{Z_{1}} \star \Gamma_{\mathrm{CSH}}^{(m-1)}[\varphi^{3}, \ldots, \varphi^{2m+2}; Z_{2}, \ldots, Z_{m}] \wedge \varphi^{2})}. \end{aligned} \end{equation*} Here, each $\varphi^{j}$ is a $V$-valued function, and each $Z_{j}$ is a vector field. Using this definition, it is not difficult to write down a formula for ${}^{(A)}\calL_{Z_{1}} \cdots {}^{(A)}\calL_{Z_{m}} J_{\mathrm{CSH}}(\varphi)$ for any $m$. However, the exact formula is rather long and unwieldy. As discussed earlier, we need not keep track of each $Z_{j}$, so we may simply write \begin{equation*} \Gamma_{\mathrm{CSH}}^{(m)}[\varphi^{1}, \varphi^{2}, \ldots, \varphi^{2m+2}] = \Gamma_{\mathrm{CSH}}^{(m)}[\varphi^{1}, \varphi^{2}, \ldots, \varphi^{2m+2}; Z_{1}, Z_{2}, \ldots, Z_{m}], \end{equation*} with the understanding that each $Z_{j}$ is one of the Killing vector fields $Z_{\mu \nu}$. With this convention, we may write down the following compact schematic formula, which suffices for our use. \begin{proposition} \label{prop:comm-J-CSH} Let $A$ be a connection 1-form and $\varphi$ a $V$-valued function, which satisfy the Chern--Simons equation $F = \star J_{\mathrm{CSH}}(\varphi)$.
Then for any $m \geq 1$, the following schematic formula holds: \begin{equation} \label{eq:comm-J-CSH} {}^{(A)}\calL^{(m)}_{Z} J_{\mathrm{CSH}}(\varphi) = \sum_{\ell=0}^{m} \Big( \sum_{k_{1} + \cdots + k_{2\ell+2} \leq m - \ell} \Gamma_{\mathrm{CSH}}^{(\ell)}[\bfZ^{(k_{1})} \varphi, \cdots, \bfZ^{(k_{2\ell+2})} \varphi] \Big). \end{equation} \end{proposition} This proposition may be proved by induction on $m$, using the Leibniz rule \eqref{eq:leibniz-bbrk-LD}, the commutation formulae \eqref{eq:covLDiX} and \eqref{eq:covLDcovud}, and the Lie algebra relation among $\set{Z_{\mu \nu}}$. We omit the straightforward details. For the specific values $m = 2, 3$, $\Gamma_{\mathrm{CSH}}^{(m)}$ takes the following form: \begin{align} \Gamma_{\mathrm{CSH}}^{(2)}[\varphi^{1}, \varphi^{2}, \ldots, \varphi^{6}] \label{eq:Gmm-CSH-2} = & 2^{2} \bbrk{\varphi^{1} \wedge (\bbrk{\varphi^{3} \wedge ((\iota_{Z} \star)^{2} \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{5}, \varphi^{6}] \wedge \varphi^{4}) } \wedge\varphi^{2})}, \\ \Gamma_{\mathrm{CSH}}^{(3)}[\varphi^{1}, \varphi^{2}, \ldots, \varphi^{8}] =& 2^{3} \bbrk{\varphi^{1} \wedge (\bbrk{\varphi^{3} \wedge (\bbrk{\varphi^{5} \wedge ((\iota_{Z} \star)^{3} \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{7}, \varphi^{8}] \wedge \varphi^{6})} \wedge \varphi^{4}) }\wedge \varphi^{2})}. \label{eq:Gmm-CSH-3} \end{align} These turn out to be the only cases needed for our application in Section~\ref{sec:BA}. We end with the computation of ${}^{(A)}\ud \Gamma_{\mathrm{CSH}}^{(0)}$ and ${}^{(A)}\dlt \Gamma_{\mathrm{CSH}}^{(0)}$; these will be used in conjunction with Lemma~\ref{lem:d-i-star} and Proposition~\ref{prop:comm-J-CSH} to compute ${}^{(A)}\ud {}^{(A)}\calL_{Z}^{(m)} J_{\mathrm{CSH}}$. \begin{lemma} \label{lem:covud-covdlt-CSH} Let $A$ be a connection 1-form and $\varphi$ a $V$-valued function, which satisfy the Chern--Simons equation $F = \star J_{\mathrm{CSH}}(\varphi)$.
Then for any $V$-valued functions $\varphi^{1}$ and $\varphi^{2}$, we have \begin{align} {}^{(A)}\ud \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{1}, \varphi^{2}] =& \bbrk{{}^{(A)}\ud \varphi^{1} \wedge {}^{(A)}\ud \varphi^{2}} + \bbrk{\varphi^{1} \wedge (\star \Gamma_{\mathrm{CSH}}^{(0)}[\varphi, \varphi] \wedge \varphi^{2})}, \label{eq:covud-CSH} \\ {}^{(A)}\dlt \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{1}, \varphi^{2}] =& \star \bbrk{{}^{(A)}\ud \varphi^{1} \wedge \star {}^{(A)}\ud \varphi^{2}} - \bbrk{\varphi^{1}, ({}^{(A)} \Box - 1) \varphi^{2}} - \bbrk{\varphi^{1}, \varphi^{2}}. \label{eq:covdlt-CSH} \end{align} \end{lemma} \begin{remark} Observe that $\bbrk{{}^{(A)}\ud \varphi^{1} \wedge {}^{(A)}\ud \varphi^{2}}$ has a structure similar to the classical null form $Q_{\mu \nu}(f, g) = \partial_{\mu} f \partial_{\nu} g - \partial_{\nu} f \partial_{\mu} g$. Moreover, a further computation using the definition of $\star$ shows that \begin{equation*} \star \bbrk{{}^{(A)}\ud \varphi^{1} \wedge \star {}^{(A)}\ud \varphi^{2}} = - \eta^{\mu \nu} \bbrk{\bfT_{\mu} \varphi^{1}, \bfT_{\nu} \varphi^{2}}, \end{equation*} which resembles the classical null form $Q_{0}(f, g) = \partial^{\mu} f \partial_{\mu} g$. These structures do not play any role in the analysis that follows, thanks to the sufficiently fast pointwise decay of solutions to the Klein--Gordon equation. However, they should be essential in the case of the \emph{massless} Chern--Simons--Higgs equation. \end{remark} \begin{proof} Using \eqref{eq:leibniz-bbrk-d}, the Leibniz rule and \eqref{eq:covud-covud}, we have \begin{align*} {}^{(A)}\ud \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{1}, \varphi^{2}] =& \bbrk{{}^{(A)}\ud \varphi^{1} \wedge {}^{(A)}\ud \varphi^{2}} + \bbrk{\varphi^{1} \wedge (\star \Gamma_{\mathrm{CSH}}^{(0)}[\varphi, \varphi] \wedge \varphi^{2})}, \end{align*} which proves \eqref{eq:covud-CSH}.
To prove \eqref{eq:covdlt-CSH}, we begin by writing out \begin{align*} {}^{(A)}\dlt \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{1}, \varphi^{2}] =& \star {}^{(A)}\ud \star \bbrk{\varphi^{1} \wedge {}^{(A)}\ud \varphi^{2}}. \end{align*} Recalling the definition \eqref{eq:bbrk-wedge} of $\bbrk{\cdot \wedge \cdot}$, it is immediate that \begin{align*} \star \bbrk{\varphi \wedge v} = \bbrk{\varphi \wedge \star v} \end{align*} when $\varphi$ is a $V$-valued $0$-form (i.e., a $V$-valued function) and $v$ is a $V$-valued $k$-form. Using \eqref{eq:leibniz-bbrk-d}, the Leibniz rule and \eqref{eq:covBox}, we have \begin{align*} {}^{(A)}\dlt \Gamma_{\mathrm{CSH}}^{(0)}[\varphi^{1}, \varphi^{2}] =& \star {}^{(A)}\ud \bbrk{\varphi^{1} \wedge \star {}^{(A)}\ud \varphi^{2}} \\ =& \star \bbrk{{}^{(A)}\ud \varphi^{1} \wedge \star {}^{(A)}\ud \varphi^{2}} + \star \bbrk{\varphi^{1} \wedge ({}^{(A)}\ud \star {}^{(A)}\ud \varphi^{2})} \\ =& \star \bbrk{{}^{(A)}\ud \varphi^{1} \wedge \star {}^{(A)}\ud \varphi^{2}} - \bbrk{\varphi^{1}, {}^{(A)} \Box \varphi^{2}}, \end{align*} which finishes the proof. \end{proof} \subsection{Covariant Lie derivatives of $J_{\mathrm{CSD}}$} \label{subsec:comm-CSD} Here we compute ${}^{(A)}\calL_{Z}^{(m)} J$ in the case of \eqref{eq:CSD}. We remind the reader that \[J_{\mathrm{CSD}} (\psi)= - \brk{i \alpha \mathcal T \psi, \psi},\] where $\alpha = \eta_{\mu \nu} \alpha^{\mu} \mathrm{d} x^{\nu}$ is a $2 \times 2$ matrix-valued 1-form with $ \alpha^{\mu} = \gamma^0 \gamma^{\mu}$. The matrix $\alpha^{\mu}$ acts on $\mathcal T \psi$ naturally, i.e., $\alpha^{\mu} \mathcal T \psi = \sum_{A} e_{A} \otimes \alpha^{\mu} \mathcal T^{A} \psi$ for any orthonormal basis $\set{e_{A}}$ of $\mathfrak{g}$. Since $\gamma^{0}$ is hermitian, $\gamma^{j}$ is anti-hermitian and $\gamma^{0} \gamma^{j} + \gamma^{j} \gamma^{0} = 0$ $(j=1,2)$, it follows that $\alpha^{\mu}$ is \emph{hermitian} for $\mu=0, 1, 2$. On the other hand, each $\mathcal T^{A}$ is \emph{anti-hermitian}.
Finally, since $\alpha^{\mu}$ and $\mathcal T^{A}$ commute, we have $\alpha \mathcal T \psi = \mathcal T \alpha \psi$. Putting these observations together, we may write \begin{equation*} J_{\mathrm{CSD}} (\psi)= \frac{1}{2} \brk{\mathcal T \psi, i \alpha \psi} + \frac{1}{2} \brk{i \alpha \psi, \mathcal T \psi} = \bbrk{\psi \wedge i \alpha \psi}. \end{equation*} Motivated by the preceding computation, we define $\mathfrak{g}$-valued differential forms $\Gamma_{\mathrm{CSD}}^{(k)}$, which are multilinear (over $\mathbb R$) in the inputs: \begin{align*} \Gamma_{\mathrm{CSD}}^{(0)}[\psi^{1}, \psi^{2}] = & \bbrk{\psi^{1} \wedge i \alpha \psi^{2}} \\ \Gamma_{\mathrm{CSD}}^{(k)}[\psi^{1}, \psi^{2}; Z_{1}, \ldots, Z_{k}] = & \bbrk{\psi^{1} \wedge i (\mathcal L_{Z_{k}} \cdots \mathcal L_{Z_{1}}\alpha) \psi^{2}}. \end{align*} Here $\psi^{1}, \psi^{2}$ are $V = \Delta \otimes W$-valued functions, and each $Z_{j}$ is a vector field. By the above computation and definition, we have \begin{equation*} J_{\mathrm{CSD}}(\psi)= \Gamma_{\mathrm{CSD}}^{(0)}[\psi, \psi] = \bbrk{\psi \wedge i \alpha \psi}. \end{equation*} As in the case of \eqref{eq:CSH}, we often do not keep track of each $Z_{j}$ and simply write \begin{equation*} \Gamma_{\mathrm{CSD}}^{(k)}[\psi^{1}, \psi^{2}] = \Gamma_{\mathrm{CSD}}^{(k)}[\psi^{1}, \psi^{2}; Z_{1}, \ldots, Z_{k}], \end{equation*} where each $Z_{j}$ is understood to be one of the Killing vector fields $Z_{\mu \nu}$. The following analogue of Lemma~\ref{lem:comm-J-CSH-1} holds. \begin{lemma} \label{lem:comm-J-CSD-1} Let $A$ be a connection 1-form and $\psi$ a $V$-valued function, which satisfy the Chern--Simons equation $F = \star J_{\mathrm{CSD}}(\psi)$. 
Then for any $V$-valued functions $\psi^{1}, \psi^{2}$ and a Killing vector field $Z$, we have \begin{equation} \label{eq:comm-J-CSD-1} {}^{(A)}\calL_{Z} \Gamma_{\mathrm{CSD}}^{(0)}[\psi^{1}, \psi^{2}] = \Gamma_{\mathrm{CSD}}^{(0)}[\bfZ \psi^{1}, \psi^{2}] + \Gamma_{\mathrm{CSD}}^{(0)}[\psi^{1}, \bfZ \psi^{2}] + \Gamma_{\mathrm{CSD}}^{(1)}[\psi^{1}, \psi^{2}] \end{equation} \end{lemma} \begin{proof} This identity holds thanks to the Leibniz rule \eqref{eq:leibniz-bbrk-LD} and the formula ${}^{(A)}\calL_Z (\alpha \psi^2) = (\calL_Z \alpha) \psi^2 + \alpha {}^{(A)}\calL_{Z} \psi^2$. \qedhere \end{proof} As a simple consequence of the previous lemma, the following formula for ${}^{(A)}\calL_{Z}^{(m)} J_{\mathrm{CSD}}(\psi)$ holds. \begin{proposition} \label{prop:comm-J-CSD} Let $A$ be a connection 1-form and $\psi$ a $V$-valued function, which satisfy the Chern--Simons equation $F = \star J_{\mathrm{CSD}}(\psi)$. Then for any $m \geq 1$, the following schematic formula holds: \begin{equation} \label{eq:comm-J-CSD} {}^{(A)}\calL_{Z}^{(m)} J_{\mathrm{CSD}}(\psi) = \sum_{\ell=0}^{m} \Big( \sum_{k_{1} + k_{2} \leq m-\ell} \Gamma_{\mathrm{CSD}}^{(\ell)}[\bfZ^{(k_{1})} \psi, \bfZ^{(k_{2})}\psi] \Big). \end{equation} \end{proposition} As in the case of \eqref{eq:CSH}, we end with a lemma that computes ${}^{(A)}\ud \Gamma_{\mathrm{CSD}}^{(k)}$. Combined with Proposition~\ref{prop:comm-J-CSD}, this lemma allows us to compute ${}^{(A)}\ud {}^{(A)}\calL_{Z}^{(m)} J_{\mathrm{CSD}}$. \begin{lemma} \label{lem:covud-CSD} Let $A$ be a connection 1-form. Then for any $V$-valued functions $\psi^{1}$ and $\psi^{2}$, we have \begin{equation} \label{eq:covud-CSD} \begin{aligned} & \hskip-2em {}^{(A)}\ud \Gamma_{\mathrm{CSD}}^{(k)}[\psi^{1}, \psi^{2}; Z_{1}, \ldots, Z_{k}] \\ =& \bbrk{{}^{(A)}\ud \psi^{1} \wedge i (\mathcal L_{Z_{k}} \cdots \mathcal L_{Z_{1}} \alpha) \psi^{2}} - \bbrk{\psi^{1} \wedge (i (\mathcal L_{Z_{k}} \cdots \mathcal L_{Z_{1}} \alpha) \wedge {}^{(A)}\ud \psi^{2})}.
\end{aligned} \end{equation} \end{lemma} We omit the proof, which is a straightforward application of \eqref{eq:leibniz-bbrk-d} and the Leibniz rule, combined with the fact that $\mathrm{d} \alpha = 0$. \subsection{Commutation relation for $U(\phi)$} \label{subsec:comm-U} In this subsection, we establish the commutation properties of the $V$-valued potential $U(\phi)$. We begin with the case of \eqref{eq:CSH}. By \eqref{eq:CSH-ptnl} and the convention $v = \kappa = 1$, $U_{\mathrm{CSH}}$ can be decomposed into \begin{equation*} U_{\mathrm{CSH}}(\varphi) = U_{3}[\varphi, \varphi, \varphi] + U_{5}[\varphi, \varphi, \varphi, \varphi, \varphi] \end{equation*} where \begin{align*} U_{3}[\varphi^{1}, \varphi^{2}, \varphi^{3}] = & \delta_{AA'} \brk{\mathcal T^{A} \varphi^{1}, \varphi^{2}} \mathcal T^{A'} \varphi^{3}, \\ U_{5}[\varphi^{1}, \varphi^{2}, \varphi^{3}, \varphi^{4}, \varphi^{5}] = & \delta_{AA'} \delta_{BB'} \Big( \brk{\mathcal T^{A} \varphi^{1}, \varphi^{2}} \brk{(\mathcal T^{A'} \mathcal T^{B'} + \mathcal T^{B'} \mathcal T^{A'}) \varphi^{3}, \varphi^{4}} \mathcal T^{B} \varphi^{5} \\ & \phantom{\delta_{AA'} \delta_{BB'} \Big(} + \brk{\mathcal T^{A} \varphi^{1}, \varphi^{2}} \brk{\mathcal T^{B} \varphi^{3}, \varphi^{4}} \mathcal T^{A'} \mathcal T^{B'} \varphi^{5} \Big). \end{align*} \begin{lemma} \label{lem:comm-U-CSH} Let $X$ be any vector field, and $\varphi^{1}, \ldots, \varphi^{5}$ be $V$-valued functions. The multilinear forms $U_{3}$ and $U_{5}$ obey the following Leibniz rules. 
\begin{align} {}^{(A)}\bfD_{X} U_{3}[\varphi^{1}, \varphi^{2}, \varphi^{3}] =& U_{3}[{}^{(A)}\bfD_{X} \varphi^{1}, \varphi^{2}, \varphi^{3}] + \cdots + U_{3}[\varphi^{1}, \varphi^{2}, {}^{(A)}\bfD_{X} \varphi^{3}] \label{eq:comm-U-CSH-3} \\ {}^{(A)}\bfD_{X} U_{5}[\varphi^{1}, \varphi^{2}, \varphi^{3}, \varphi^{4}, \varphi^{5}] =& U_{5}[{}^{(A)}\bfD_{X} \varphi^{1}, \varphi^{2}, \varphi^{3}, \varphi^{4}, \varphi^{5}] + \cdots \label{eq:comm-U-CSH-5} \\ & + U_{5}[\varphi^{1}, \varphi^{2}, \varphi^{3}, \varphi^{4}, {}^{(A)}\bfD_{X} \varphi^{5}]. \notag \end{align} \end{lemma} \begin{proof} The idea of the proof is similar to that of Lemma~\ref{lem:leibniz-bbrk}. To exemplify, we will show that \begin{equation*} \tilde{U}_{5}[\varphi^{1}, \varphi^{2}, \varphi^{3}, \varphi^{4}, \varphi^{5}] = \delta_{AA'} \delta_{BB'} \brk{\mathcal T^{A} \varphi^{1}, \varphi^{2}} \brk{\mathcal T^{A'} \mathcal T^{B'} \varphi^{3}, \varphi^{4}} \mathcal T^{B} \varphi^{5}, \end{equation*} which is a part of the quintilinear form $U_{5}$, obeys the Leibniz rule \begin{equation} \label{eq:comm-U-CSH-ex} {}^{(A)}\bfD_{X} \tilde{U}_{5}[\varphi^{1}, \varphi^{2}, \varphi^{3}, \varphi^{4}, \varphi^{5}] =\tilde{U}_{5}[{}^{(A)}\bfD_{X} \varphi^{1}, \varphi^{2}, \varphi^{3}, \varphi^{4}, \varphi^{5}] + \cdots + \tilde{U}_{5}[\varphi^{1}, \varphi^{2}, \varphi^{3}, \varphi^{4}, {}^{(A)}\bfD_{X} \varphi^{5}]. \end{equation} The desired identities \eqref{eq:comm-U-CSH-3} and \eqref{eq:comm-U-CSH-5} may be proved by a similar argument. As in the proof of Lemma~\ref{lem:leibniz-bbrk}, we introduce the shorthand $a = A(X)$, and fix an orthonormal basis $\set{e_{A}}$ of $\mathfrak{g}$ so that $\mathcal T^{A} \varphi = \delta^{AA'} e_{A'} \cdot \varphi$. We define the structure constants $c_{AB}^{C}$ by $\LieBr{e_{A}} {e_{B}} = c_{AB}^{C} e_{C}$. 
The difference between the left- and right-hand sides of \eqref{eq:comm-U-CSH-ex} can then be computed as follows: \begin{align*} & a^{C} c_{CA}^{D} \delta^{A A'} \delta^{B B'} \brk{e_{D} \cdot \varphi^{1}, \varphi^{2}} \brk{e_{A'} \cdot (e_{B'} \cdot \varphi^{3}), \varphi^{4}} e_{B} \cdot \varphi^{5} \\ & + a^{C} c_{CA'}^{D} \delta^{A A'} \delta^{B B'} \brk{e_{A} \cdot \varphi^{1}, \varphi^{2}} \brk{e_{D} \cdot (e_{B'} \cdot \varphi^{3}), \varphi^{4}} e_{B} \cdot \varphi^{5} \\ & + a^{C} c_{CB'}^{D} \delta^{A A'} \delta^{B B'} \brk{e_{A} \cdot \varphi^{1}, \varphi^{2}} \brk{e_{A'} \cdot (e_{D} \cdot \varphi^{3}), \varphi^{4}} e_{B} \cdot \varphi^{5} \\ & + a^{C} c_{CB}^{D} \delta^{A A'} \delta^{B B'} \brk{e_{A} \cdot \varphi^{1}, \varphi^{2}} \brk{e_{A'} \cdot (e_{B'} \cdot \varphi^{3}), \varphi^{4}} e_{D} \cdot \varphi^{5}. \end{align*} Relabeling the indices, we see that this expression vanishes, and hence \eqref{eq:comm-U-CSH-ex} follows, provided that \begin{equation*} c_{CD}^{A} \delta^{DA'} + c_{CD}^{A'} \delta^{A D} = 0, \quad c_{CD}^{B'} \delta^{BD} + c_{CD}^{B} \delta^{DB'} = 0. \end{equation*} These identities are precisely \eqref{eq:str-const}. \qedhere \end{proof} Next we turn to the case of \eqref{eq:CSD}, where $V = \Delta \otimes W$. Recall the definition of $U_{\mathrm{CSD}}(\psi)$ given in \eqref{eq:U-CSD}. Using the notation $\bbrk{\cdot, \cdot}$, the $V$-valued potential $U_{\mathrm{CSD}}(\psi)$ may be rewritten as \begin{equation*} U_{\mathrm{CSD}} (\psi) = \tilde{U}_{3}[\psi, \psi, \psi], \end{equation*} where \begin{equation*} \tilde{U}_{3}[\psi^{1}, \psi^{2}, \psi^{3}] = \frac{1}{2} \epsilon(T_{\mu}, T_{\nu}, T_{\lambda}) \gamma^{\mu} \gamma^{\nu} \bbrk{\psi^{1}, i \alpha^{\lambda} \psi^{2}} \psi^{3}. \end{equation*} \begin{lemma} \label{lem:comm-U-CSD} Let $X$ be any vector field, and $\psi^{1}, \ldots, \psi^{3}$ be $V$-valued functions. The multilinear form $\tilde{U}_{3}$ obeys the following Leibniz rule.
\begin{align} {}^{(A)}\bfD_{X} \tilde{U}_{3}[\psi^{1}, \psi^{2}, \psi^{3}] =& \tilde{U}_{3}[{}^{(A)}\bfD_{X} \psi^{1}, \psi^{2}, \psi^{3}] + \cdots + \tilde{U}_{3}[\psi^{1}, \psi^{2}, {}^{(A)}\bfD_{X} \psi^{3}]. \label{eq:comm-U-CSD} \end{align} \end{lemma} \begin{proof} Since $\epsilon(T_{\mu}, T_{\nu}, T_{\lambda})$, $\gamma^{\mu}$ and $\alpha^{\mu}$ are constant, the desired conclusion follows from Lemmas~\ref{lem:leibniz-gV} and \ref{lem:leibniz-bbrk}. \qedhere \end{proof} \subsection{Pointwise bounds} \label{subsec:ptwise} Let $\Omega$ be a $V$-, $\mathfrak{g}$- or real-valued differential form. Recall that the norm $\abs{\Omega}$ is defined by the formula \begin{align*} \abs{\Omega}^{2} = \sum_{\mu_{1} < \cdots < \mu_{k}} \abs{\Omega(T_{\mu_{1}}, \ldots, T_{\mu_{k}})}^{2}, \end{align*} where we use $\brk{\cdot, \cdot}$ [resp. $\brk{\cdot, \cdot}_{\mathfrak{g}}$] on the right-hand side when $\Omega$ is $V$- [resp. $\mathfrak{g}$-] valued. The following bounds are obvious, yet useful: \begin{equation} \label{eq:ptwise-easy} \begin{aligned} \abs{\star \Omega} \leq & \abs{\Omega}, \\ \abs{\iota_{T_{\mu}} \Omega} \leq & \abs{\Omega}, \\ \abs{\iota_{Z_{\mu \nu}} \Omega} \leq & \tau \cosh y \abs{\Omega}, \\ \abs{\iota_{S} \Omega} \leq & \tau \cosh y \abs{\Omega}, \\ \abs{\iota_{N} \Omega} \leq & \cosh y \abs{\Omega}. \end{aligned} \end{equation} Let $\Omega^{1}, \Omega^{2}$ be $V$-, $\mathfrak{g}$- or real-valued forms for which the wedge product $\Omega^{1} \wedge \Omega^{2}$ can be defined as in Section~\ref{subsec:extr-calc-2} (e.g., $\Omega^{1}$ is $\mathfrak{g}$-valued and $\Omega^{2}$ is $V$-valued). Then we have the inequality \begin{align*} \abs{\Omega^{1} \wedge \Omega^{2}} \leq C \abs{\Omega^{1}} \abs{\Omega^{2}}. \end{align*} Similarly, for a $\mathfrak{g}$-valued form $a$ and a $V$-valued form $v$, we have \begin{align*} \abs{\bbrk{a \wedge v}} \leq C \abs{a} \abs{v}.
\end{align*} Recall the multilinear expressions $\mathfrak{N}_{1}, \ldots, \mathfrak{N}_{4}$, which were defined in Section~\ref{subsec:comm-covBox} to facilitate the computation of $[\bfZ^{(m)}, {}^{(A)} \Box - 1]$. The following pointwise bounds hold for these expressions. \begin{lemma} \label{lem:ptwise-N} Let $\Gamma$, $\Gamma^{j}$ $(j=1,2)$ be $\mathfrak{g}$-valued 1-forms and $\phi$ a $V$-valued function. Then we have \begin{align} \abs{\mathfrak{N}_{1}[\Gamma, \phi]} \leq & C \tau \abs{\iota_{N} {}^{(A)}\ud \Gamma} \abs{\phi} \label{eq:ptwise-N1} \\ \leq & C \tau \cosh y \abs{{}^{(A)}\ud \Gamma} \abs{\phi}, \label{eq:ptwise-N1-easy} \\ \abs{\mathfrak{N}_{2}[\Gamma, \phi]} \leq & C \tau \Big( \abs{\Gamma} \abs{\bfN \phi} + \abs{\iota_{N} \Gamma} \abs{\bfT \phi} \Big)\label{eq:ptwise-N2} \\ \leq & C \tau \cosh y \abs{\Gamma} \abs{\bfT \phi}, \label{eq:ptwise-N2-easy} \\ \abs{\mathfrak{N}_{3}[\Gamma, \phi]} \leq & C \abs{\Gamma} \abs{\phi}, \label{eq:ptwise-N3}\\ \abs{\mathfrak{N}_{4}[\Gamma^{1}, \Gamma^{2}, \phi]} \leq & C \tau^{2} \cosh^{2} y \abs{\Gamma^{1}} \abs{\Gamma^{2}} \abs{\phi}. \label{eq:ptwise-N4} \end{align} \end{lemma} \begin{proof} Recalling the definitions \eqref{eq:frkN-1} and \eqref{eq:frkN-2} of $\mathfrak{N}_{1}$ and $\mathfrak{N}_{2}$, we see that we need to bound $\abs{\iota_{Z} \star \Omega}$, where $\Omega = {}^{(A)}\ud \Gamma \wedge \phi$ or $\Gamma \wedge {}^{(A)}\ud \phi$. In rectilinear coordinates, it can be checked that \begin{equation*} x_{\mu} \epsilon_{\nu \kappa \lambda} - x_{\nu} \epsilon_{\mu \kappa \lambda} = \epsilon_{\mu \nu \kappa} x_{\lambda} - \epsilon_{\mu \nu \lambda} x_{\kappa}. \end{equation*} Let $\Omega$ be a ($V$-, $\mathfrak{g}$- or real-valued) 2-form. By the preceding identity, we have \begin{equation*} \iota_{Z_{\mu \nu}} \star \Omega = 2 \tau \sum_{\kappa} \epsilon_{\mu \nu \kappa} \iota_{N} \iota_{T_{\kappa}} \Omega.
\end{equation*}
Recalling the definition of $\abs{\Omega}$, we see that
\begin{equation*}
\abs{\iota_{Z_{\mu \nu}} \star \Omega} \leq C \tau \abs{\iota_{N} \Omega}.
\end{equation*}
The bounds \eqref{eq:ptwise-N1} and \eqref{eq:ptwise-N2} now follow, using the Leibniz rule for $\iota_{N}$. The inequalities \eqref{eq:ptwise-N1-easy} and \eqref{eq:ptwise-N2-easy} are immediate consequences of \eqref{eq:ptwise-easy}. Moreover, \eqref{eq:ptwise-N3} and \eqref{eq:ptwise-N4} are straightforward to establish; hence we omit their proofs. \qedhere
\end{proof}
For $\Gamma_{\mathrm{CSH}}^{(k)}$ [resp. $\Gamma_{\mathrm{CSD}}^{(k)}$], which were defined in Section~\ref{subsec:comm-CSH} and arise in the computation of ${}^{(A)}\calL_{Z}^{(m)} J_{\mathrm{CSH}}$ [resp. ${}^{(A)}\calL_{Z}^{(m)} J_{\mathrm{CSD}}$], the following pointwise bounds hold.
\begin{lemma} \label{lem:ptwise-CSH}
Let $\phi^{j}$ $(j=1,2,\ldots)$ be $V$-valued functions. Then for any integer $k = 0, 1, \ldots$, we have
\begin{align}
\abs{\Gamma_{\mathrm{CSH}}^{(k)}[\phi^{1}, \phi^{2}, \ldots, \phi^{2k+2}]} \leq & C_{k} (\tau \cosh y)^{k} \abs{\phi^{1}} \cdots \abs{\phi^{2k+1}} \abs{\bfT \phi^{2k+2}}. \label{eq:ptwise-CSH}
\end{align}
In the case $k = 0$, for any vector $X$ we have the following refined bound:
\begin{equation} \label{eq:ptwise-CSH:X}
\abs{\iota_{X} \Gamma_{\mathrm{CSH}}^{(0)}[\phi^{1}, \phi^{2}]} \leq C \abs{\phi^{1}} \abs{{}^{(A)}\bfD_{X} \phi^{2}}.
\end{equation}
\end{lemma}
\begin{lemma} \label{lem:ptwise-CSD}
Let $\phi^{1}, \phi^{2}$ be $V$-valued functions. Then for any integer $k = 0, 1, \ldots$, we have
\begin{align}
\abs{\Gamma_{\mathrm{CSD}}^{(k)}[\phi^{1}, \phi^{2}]} \leq & C_{k} \abs{\phi^{1}} \abs{\phi^{2}}. \label{eq:ptwise-CSD}
\end{align}
\end{lemma}
Finally, the following pointwise bounds for the multilinear forms $U_{3}$, $U_{5}$ and $\tilde{U}_{3}$ (defined in Section~\ref{subsec:comm-U}) hold.
\begin{lemma} \label{lem:ptwise-U} Let $\phi^{j}$ $(j=1, \ldots, 5)$ be $V$-valued functions. Then we have \begin{align} \abs{U_{3}[\phi^{1}, \phi^{2}, \phi^{3}]} \leq & C \abs{\phi^{1}} \abs{\phi^{2}} \abs{\phi^{3}}, \label{eq:ptwise-U-CSH-3} \\ \abs{U_{5}[\phi^{1}, \phi^{2}, \phi^{3}, \phi^{4}, \phi^{5}]} \leq & C \abs{\phi^{1}} \abs{\phi^{2}} \abs{\phi^{3}} \abs{\phi^{4}} \abs{\phi^{5}}, \label{eq:ptwise-U-CSH-5} \\ \abs{\tilde{U}_{3}[\phi^{1}, \phi^{2}, \phi^{3}]} \leq & C \abs{\phi^{1}} \abs{\phi^{2}} \abs{\phi^{3}}. \label{eq:ptwise-U-CSD} \end{align} \end{lemma} We omit the straightforward proofs of the preceding lemmas. \section{Proof of the main a priori estimates} \label{sec:BA} In this section, we carry out the proof of the main a priori estimates (Proposition~\ref{prop:main}). In Section~\ref{subsec:BAs}, we reduce the proof of Proposition~\ref{prop:main} to a bootstrap argument. In particular, we list the bootstrap assumptions, and introduce a few conventions that will simplify the further presentation. In the remainder of the section, we show that the bootstrap assumptions can be improved provided that $\delta_{\star}(R)$ in the hypothesis of Proposition~\ref{prop:main} is chosen sufficiently small. \subsection{Reduction to a bootstrap argument} \label{subsec:BAs} Throughout this section, we assume that $(A, \phi)$ is a solution to \eqref{eq:CS-uni} satisfying the hypotheses of Proposition~\ref{prop:main}, where $\delta_{\ast} = \delta_{\ast}(R)$ is to be specified below. We begin with a bound on the initial hyperboloid $\mathcal H_{2R}$, which is the starting point of the proof of Proposition~\ref{prop:main}. 
\begin{lemma} \label{lem:BA:ini}
If $\epsilon$ is sufficiently small depending on $R$, then we have
\begin{equation} \label{eq:BA:ini}
\sum_{k =0}^{4} \Big( \int_{\mathcal H_{2R}} \bfe_{\mathcal H_{2R}}[\bfZ^{(k)} \phi] \, \mathrm{d} \sgm_{\mathcal H_{2R}}\Big)^{1/2} \leq C(R) \epsilon,
\end{equation}
where the energy density $\bfe_{\mathcal H_{2R}}$ was defined in \eqref{eq:ed}.
\end{lemma}
This lemma is a consequence of \eqref{eq:initial-hyp:est}, Lemma~\ref{lem:ini-en} and the formula for the commutator $[\bfZ^{(m)}, {}^{(A)} \Box - 1]$ derived in Section~\ref{sec:comm}. Observe that $\abs{\bfZ^{(m)} \phi} \leq C(m, R) \sum_{k=0}^{m} \abs{\bfT^{(k)} \phi}$ in the region $\mathcal R_{t=2R}^{\tau = 2R}$ defined in \eqref{eq:ini-en:region}, thanks to the support property \eqref{eq:initial-hyp:fsp}. This observation allows us to use \eqref{eq:initial-hyp:est} (combined with the Sobolev inequality) to bound the error terms in Lemma~\ref{lem:ini-en}. We omit further details.
Next, we turn to the statement of the central bootstrap assumptions. In order to proceed, we introduce the following notation: By $\tau^{\alpha +}$ [resp. $\tau^{\alpha-}$] for some $\alpha \in \mathbb R$, we mean $\tau^{\alpha + \delta}$ [resp. $\tau^{\alpha - \delta}$] for a fixed absolute constant $0 < \delta \ll 1$.
For $2R \leq \tau \leq T'$ (where $T' \leq T$), the following \emph{bootstrap assumptions} will be made:
\begin{itemize}
\item {\bf $L^{2}$ bounds with growth.} For $0 \leq m \leq 4$,
\begin{equation} \label{eq:BA:L2}
\wnrm{\cosh y \bfZ^{(m)} \phi}_{L^{2}_{\tau}} + \wnrm{\bfT \bfZ^{(m)} \phi}_{L^{2}_{\tau}} \leq 10 \epsilon_{1} \log^{m} (1+\tau).
\end{equation}
For $0 \leq m \leq 3$,
\begin{equation} \label{eq:BA:L2:S}
\wnrm{\cosh y \bfZ^{(m)} \bfN \phi}_{L^{2}_{\tau}} \leq 10 \epsilon_{1} \log^{m+1} (1+\tau).
\end{equation} \item {\bf Sharp $L^{\infty}$ decay.} \begin{equation} \label{eq:BA:Linfty} \wnrm{\cosh y \phi}_{L^{\infty}_{\tau}} + \wnrm{\cosh y \bfN \phi}_{L^{\infty}_{\tau}} + \wnrm{\bfT \phi}_{L^{\infty}_{\tau}} \leq 10 \epsilon_{1} \tau^{-1}. \end{equation} \item {\bf Nonlinearity estimates.} For $0 \leq m \leq 4$, \begin{equation} \label{eq:BA:KG} \wnrm{\cosh y ({}^{(A)} \Box - 1) \bfZ^{(m)} \phi}_{L^{2}_{\tau}} \leq 10 \epsilon_{1}^{3-} \tau^{-1} \log^{m-1} (1+\tau). \end{equation} \end{itemize} Here, $\epsilon_{1} = B_{0} \epsilon$ and $B_{0}$ is a large absolute constant to be chosen later; recall from the hypothesis of Proposition~\ref{prop:main} that $\epsilon_{1} = B_{0} \epsilon \leq B_{0} \delta_{\ast}$. From Lemma~\ref{lem:BA:ini} and a simple computation involving the Klainerman--Sobolev inequality (Proposition~\ref{prop:KlSob}) and the formula for $[\bfZ^{(m)}, {}^{(A)} \Box - 1]$ in Section~\ref{subsec:comm-covBox}, it follows that \eqref{eq:BA:L2}--\eqref{eq:BA:KG} hold on the initial hyperboloid $\tau = 2R$ \emph{without} the factor of 10 on the right-hand side if $B_{0}$ is chosen sufficiently large. In the rest of Section~\ref{sec:BA}, our goal is to show that if $B_{0}$ is large enough and $\delta_{\ast}$ is sufficiently small, then the above bootstrap assumptions may be improved, in the sense that \eqref{eq:BA:L2}--\eqref{eq:BA:KG} hold for $\tau \in [2R, T']$ \emph{without} the factor of 10 on the right-hand side. By a routine continuity argument in $T'$, we may conclude that \eqref{eq:BA:L2}--\eqref{eq:BA:KG} hold for all $\tau \in [2R, T]$; Proposition~\ref{prop:main} would then follow. We end with a few conventions that will be in effect for the rest of this section. First, in view of the fact that $\delta_{\ast}$ would be chosen very small at the end, {\bf we assume that $0 < \epsilon_{1} \leq 1$}. 
Second, unless otherwise stated, {\bf all the estimates are for $\tau \in [2R, T']$.} Finally, since $R$ is fixed, {\bf we suppress the dependence of constants on $R$.}
\subsection{Consequences of the bootstrap assumptions} \label{subsec:BA-conseq}
Henceforth, our goal is to improve the bootstrap assumptions \eqref{eq:BA:L2}--\eqref{eq:BA:KG}. We begin by deriving some quick consequences of the bootstrap assumptions in Section~\ref{subsec:BAs}.
We start with some decay estimates for $\bfZ^{(m)} \phi$ and $\bfZ^{(m)} \bfN \phi$, which follow from the Klainerman--Sobolev inequality (Proposition~\ref{prop:KlSob}).
\begin{lemma} \label{lem:BA:KlSob}
Suppose that $(A, \phi)$ satisfies the bootstrap assumptions \eqref{eq:BA:L2} and \eqref{eq:BA:L2:S}. Then for $0 \leq m \leq 2$, we have
\begin{equation} \label{eq:weakLinfty}
\wnrm{\cosh y \bfZ^{(m)} \phi}_{L^{\infty}_{\tau}} + \wnrm{\cosh y \bfZ^{(m-1)} \bfN \phi}_{L^{\infty}_{\tau}} \leq C \epsilon_{1} \, \tau^{-1} \log^{m+2} (1+\tau),
\end{equation}
where the last term on the left-hand side should be omitted in the case $m = 0$. For $m = 3$, we have
\begin{equation} \label{eq:weakLp}
\wnrm{\cosh y \bfZ^{(3)} \phi}_{L^{4}_{\tau}} + \wnrm{\cosh y \bfZ^{(2)} \bfN \phi}_{L^{4}_{\tau}} \leq C \epsilon_{1} \, \tau^{-\frac{1}{2}} \log^{4} (1+\tau).
\end{equation}
\end{lemma}
\begin{proof}
The inequality \eqref{eq:weakLinfty} follows from \eqref{eq:BA:L2}, \eqref{eq:BA:L2:S} and the Klainerman--Sobolev inequality (Proposition~\ref{prop:KlSob}). Then \eqref{eq:weakLp} follows by application of Lemma~\ref{lem:Z}. \qedhere
\end{proof}
Our argument below requires bounds for $\bfN \bfZ^{(m)} \phi$. The following estimate for the commutator $[\bfZ^{(m)}, \bfN] \phi$ may be used to show that such estimates follow from the corresponding bounds for $\bfZ^{(m)} \bfN \phi$.
\begin{lemma} \label{lem:BA:comm-NZ}
Suppose that $(A, \phi)$ satisfies the bootstrap assumptions \eqref{eq:BA:L2} and \eqref{eq:BA:L2:S}.
Then for $1 \leq m \leq 3$ and $1 \leq p \leq \infty$, we have \begin{equation} \label{eq:comm-NZ} \begin{aligned} & \hskip-2em \wnrm{\cosh y [\bfZ^{(m)}, \bfN] \phi}_{L^{p}_{\tau}} \\ \leq & C \epsilon_{1}^{2} \tau^{-1+} \sum_{k \leq m-1} \Big( \wnrm{\cosh y \bfZ^{(k)} \phi}_{L^{p}_{\tau}} + \wnrm{\cosh y \bfN \bfZ^{(k)} \phi}_{L^{p}_{\tau}} + \wnrm{\bfT \bfZ^{(k)} \phi}_{L^{p}_{\tau}} \Big). \end{aligned} \end{equation} \end{lemma} In order to prove Lemma~\ref{lem:BA:comm-NZ}, it is convenient to establish certain bounds for $\bfN \bfZ^{(m)} \phi$ and $\bfT \bfZ^{(m)} \phi$ simultaneously, since these expressions arise in the commutator $[\bfN, \bfZ^{(m)}] \phi$. We record these bounds as a lemma: \begin{lemma} \label{lem:BA:NZ} Suppose that $(A, \phi)$ satisfies the bootstrap assumptions \eqref{eq:BA:L2} and \eqref{eq:BA:L2:S}. For $0 \leq m \leq 3$, we have \begin{equation} \label{eq:NZ-L2} \wnrm{\cosh y \bfN \bfZ^{(m)} \phi}_{L^{2}_{\tau}} \leq C \epsilon_{1} \log^{m}(1+\tau). \end{equation} Moreover, the following $L^{p}$ estimates also hold: \begin{align} \wnrm{\cosh y \bfN \bfZ^{(m)} \phi}_{L^{\infty}_{\tau}} + \wnrm{\bfT \bfZ^{(m)} \phi}_{L^{\infty}_{\tau}} \leq & C \epsilon_{1} \tau^{-1} \log^{m+2}(1+\tau) \quad \hbox{ for } 0 \leq m \leq 1, \label{eq:NZ-Linfty} \\ \wnrm{\cosh y \bfN \bfZ^{(m)} \phi}_{L^{4}_{\tau}} + \wnrm{\bfT \bfZ^{(m)} \phi}_{L^{4}_{\tau}} \leq & C \epsilon_{1} \tau^{-\frac{1}{2}} \log^{4}(1+\tau) \quad \hbox{ for } m = 2. \label{eq:NZ-L4} \end{align} \end{lemma} \begin{proof} [Proof of Lemmas~\ref{lem:BA:comm-NZ} and \ref{lem:BA:NZ}] We begin with a simple observation: Once \eqref{eq:comm-NZ} is established in the range $1 \leq m \leq m_{0}$ for some $m_{0} \leq 3$, then \eqref{eq:NZ-L2}, \eqref{eq:NZ-Linfty} and \eqref{eq:NZ-L4} for the same range of $m$ follow. 
Indeed, proceeding inductively, we may assume that \eqref{eq:comm-NZ}, \eqref{eq:NZ-L2}, \eqref{eq:NZ-Linfty} and \eqref{eq:NZ-L4} hold up to some $m-1$, where $0 \leq m \leq m_{0}$ (in the case $m = 0$, we make no induction hypothesis). Then the claim for $m$ is obvious for \eqref{eq:NZ-L2} and the term involving $\bfN \bfZ^{(m)} \phi$ in \eqref{eq:NZ-Linfty} and \eqref{eq:NZ-L4}. To bound $\bfT \bfZ^{(m)} \phi$, we use the pointwise inequality
\begin{equation} \label{eq:T-N-Z}
\abs{\bfT \tilde{\phi}} \leq C \cosh y ( \abs{\bfN \tilde{\phi}} + \tau^{-1} \abs{\bfZ \tilde{\phi}})
\end{equation}
with $\tilde{\phi} = \bfZ^{(m)} \phi$; here \eqref{eq:T-N-Z} follows from the identity $- \tau^{2} T_{\nu} = x^{\mu} Z_{\mu \nu} + x_{\nu} S$ (a consequence of $x^{\mu} x_{\mu} = - \tau^{2}$ and $x^{\mu} T_{\mu} = S$), together with $S = \tau N$ and $\abs{x_{\nu}} \leq \tau \cosh y$. By the preceding discussion, the first term on the right-hand side is acceptable. On the other hand, the last term obeys (thanks to Lemma~\ref{lem:BA:KlSob}) a far better decay rate than needed:
\begin{equation} \label{eq:Z-lower-order}
\wnrm{\cosh y \tau^{-1} \bfZ^{(m+1)} \phi}_{L^{p}_{\tau}} \leq C \epsilon_{1} \tau^{-2 + \frac{2}{p}} \log^{4}(1+\tau),
\end{equation}
for any $1 \leq p \leq \infty$ when $0 \leq m \leq 1$ and $2 \leq p \leq 4$ when $m = 2$.
Consequently, not only would Lemma~\ref{lem:BA:NZ} follow once we prove \eqref{eq:comm-NZ} for all $m$, but we are also allowed to employ the bounds in Lemma~\ref{lem:BA:NZ} for $m$ for which \eqref{eq:comm-NZ} has already been proved. As discussed above, this will be useful because expressions of the form $\bfN \bfZ^{(k)} \phi$ and $\bfT \bfZ^{(k)} \phi$ with $k \leq m-1$ appear in the commutator $[\bfN, \bfZ^{(m)}] \phi$; see, in particular, the case of \eqref{eq:CSH} below.
Our next task is to compute $[\bfN, \bfZ^{(m)}] \phi$. The commutator between $\bfZ_{\mu \nu}$ and $\bfS = \tau \bfN$ is
\begin{align*}
[\bfS, \bfZ_{\mu \nu}] \phi =& - (\iota_{S} \iota_{Z_{\mu \nu}} \star J )\phi,
\end{align*}
since $S$ and $Z_{\mu \nu}$ commute.
From this computation, \eqref{eq:covLDiX}, \eqref{eq:star-LDZ} and the Lie algebra relations among $\set{S, Z_{\mu \nu}}$, we may derive the following schematic commutation formula for $[\bfZ^{(m)}, \bfN] \phi$:
\begin{align*}
[\bfN, \bfZ^{(m)}] \phi =& \tau^{-1} \sum_{k_{1} + k_{2} \leq m-1} (\iota_{S} \iota_{Z} \star {}^{(A)}\calL_{Z}^{(k_{1})} J) \bfZ^{(k_{2})} \phi \quad \hbox{ for } m \geq 1.
\end{align*}
Given a ($V$-, $\mathfrak{g}$- or real-valued) $1$-form $\Gamma$, we compute using the rectilinear coordinates $(t = x^{0}, x^{1}, x^{2})$
\begin{align*}
\iota_{S} \iota_{Z_{\mu \nu}} (\star \Gamma) =& (\eta^{-1})^{\alpha \beta} (x^{\kappa} x_{\mu} \epsilon_{\nu \kappa \alpha} - x^{\kappa} x_{\nu} \epsilon_{\mu \kappa \alpha}) \Gamma(T_{\beta}) \\
=& (\eta^{-1})^{\alpha \beta} \epsilon_{\mu \nu \kappa} x^{\kappa} x_{\alpha} \Gamma(T_{\beta}) - (\eta^{-1})^{\alpha \beta} \epsilon_{\mu \nu \alpha} x_{\kappa} x^{\kappa} \Gamma(T_{\beta}).
\end{align*}
Observe furthermore that $(\eta^{-1})^{\alpha \beta} x_{\alpha} T_{\beta} = x^{\beta} T_{\beta} = \tau N$ and $x_{\kappa} x^{\kappa} = - \tau^{2}$. Hence
\begin{equation}
\abs{\iota_{S} \iota_{Z} \star \Gamma} \leq C ( \tau^{2} \cosh y \abs{\iota_{N} \Gamma} + \tau^{2} \abs{\Gamma}).
\end{equation}
We therefore arrive at the pointwise bound
\begin{equation} \label{eq:key-comm-NZ}
\begin{aligned}
\cosh y \abs{[\bfN, \bfZ^{(m)}] \phi} \leq C \tau \cosh y \sum_{k_{1} + k_{2} \leq m-1} \Big(\cosh y \abs{\iota_{N}{}^{(A)}\calL_{Z}^{(k_{1})} J} + \abs{{}^{(A)}\calL_{Z}^{(k_{1})} J} \Big) \abs{\bfZ^{(k_{2})} \phi}.
\end{aligned}
\end{equation}
In order to proceed, we divide into two cases: $J = J_{\mathrm{CSH}}$ and $J = J_{\mathrm{CSD}}$.
\paragraph*{\bfseries - Case 1: $J = J_{\mathrm{CSH}}$}
When $m = 1$, by \eqref{eq:ptwise-CSH:X} and \eqref{eq:key-comm-NZ}, we may estimate
\begin{align*}
\cosh y \abs{[\bfN, \bfZ] \phi} \leq & C \tau \cosh y \Big( \abs{\iota_{N} \Gamma_{\mathrm{CSH}}^{(0)}[\phi, \phi]} + \abs{\Gamma_{\mathrm{CSH}}^{(0)}[\phi, \phi]} \Big) \abs{\phi} \\
\leq & C \tau \cosh y \abs{\phi}^{2} (\cosh y \abs{\bfN \phi} + \abs{\bfT \phi} ).
\end{align*}
We now take the $\wnrm{\cdot}_{L^{p}_{\tau}}$ norm and apply H\"older's inequality, where we estimate $\cosh y \bfN \phi$ and $\bfT \phi$ using $\wnrm{\cdot}_{L^{p}_{\tau}}$ and $\cosh y \abs{\phi}^{2}$ using $\wnrm{\cdot}_{L^{\infty}_{\tau}}$. By Lemma~\ref{lem:BA:KlSob}, the desired inequality \eqref{eq:comm-NZ} follows. As a dividend (see the remarks at the beginning of the proof), we may now use the bounds in Lemma~\ref{lem:BA:NZ} up to $m = 1$.
For $m \geq 2$, using Proposition~\ref{prop:comm-J-CSH} and \eqref{eq:key-comm-NZ}, $\cosh y \abs{[\bfN, \bfZ^{(m)}] \phi}$ is bounded from above by
\begin{equation} \label{eq:comm-NZ:CSH}
\begin{aligned}
& C_{m} \sum_{j_{1} + j_{2} + k_{2} \leq m-1} \tau \cosh y \Big( \cosh y \abs{\iota_{N} \Gamma_{\mathrm{CSH}}^{(0)} [\bfZ^{(j_{1})} \phi, \bfZ^{(j_{2})} \phi]} + \abs{\Gamma_{\mathrm{CSH}}^{(0)} [\bfZ^{(j_{1})} \phi, \bfZ^{(j_{2})} \phi] } \Big) \abs{\bfZ^{(k_{2})} \phi} \\
& + C_{m} \sum_{\substack{k_{1}, k_{2}, \ell : \\ k_{1} + k_{2} \leq m-1 \\ 1 \leq \ell \leq k_{1}}} \sum_{j_{1} + \cdots + j_{2 \ell + 2} \leq k_{1} - \ell} \tau \cosh^{3} y \abs{\Gamma_{\mathrm{CSH}}^{(\ell)}[\bfZ^{(j_{1})} \phi, \ldots, \bfZ^{(j_{2 \ell + 2})} \phi]} \abs{\bfZ^{(k_{2})} \phi},
\end{aligned}
\end{equation}
where we used the crude bound
\begin{equation} \label{eq:crude-Gmm-N}
\cosh y \abs{\iota_{N} \Gamma} + \abs{\Gamma} \leq C \cosh^{2} y \abs{\Gamma}
\end{equation}
on the second line\footnote{Such a simplifying procedure is possible in this case since there are enough factors of $\bfZ^{(j)} \phi$ to
absorb the extra weights of $\cosh y$; see the discussion following \eqref{eq:comm-NZ:CSH-nonmain}.}. In what follows, we treat the sum on each line separately.
Consider a summand from the first line of \eqref{eq:comm-NZ:CSH}, which can be bounded using \eqref{eq:ptwise-CSH:X} by
\begin{equation} \label{eq:comm-NZ:CSH-main}
C_{m} \tau \cosh y \abs{\bfZ^{(j_{1})} \phi} \Big( \cosh y \abs{\bfN \bfZ^{(j_{2})} \phi} + \abs{\bfT \bfZ^{(j_{2})} \phi}\Big) \abs{\bfZ^{(k_{2})} \phi}.
\end{equation}
Taking the $\wnrm{\cdot}_{L^{p}_{\tau}}$ norm, we apply H\"older's inequality and bound the highest order factor with an appropriate weight of $\cosh y$ (i.e., either $\cosh y \bfZ^{(k)} \phi$, $\cosh y \bfN \bfZ^{(k)} \phi$ or $\bfT \bfZ^{(k)} \phi$) using $\wnrm{\cdot}_{L^{p}_{\tau}}$ and the rest using $\wnrm{\cdot}_{L^{\infty}_{\tau}}$. Since we are only considering $m = 2, 3$, the order of the non-highest factors cannot exceed $1$; hence the non-highest order factors obey a pointwise upper bound $C \tau^{-1} \cosh^{-1} y \log^{3}(1+\tau)$ by Lemma~\ref{lem:BA:KlSob} and \eqref{eq:NZ-Linfty} for $0 \leq m \leq 1$. From such a consideration, the contribution of \eqref{eq:comm-NZ:CSH-main} is easily seen to be acceptable for the proof of \eqref{eq:comm-NZ}.
Next, consider a summand from the second line of \eqref{eq:comm-NZ:CSH}, which can be bounded by
\begin{equation} \label{eq:comm-NZ:CSH-nonmain}
C_{m} \tau^{\ell+1} \cosh^{\ell+2} y \abs{\bfZ^{(j_{1})} \phi} \cdots \abs{\bfZ^{(j_{2 \ell + 1})} \phi} \abs{\bfT \bfZ^{(j_{2 \ell + 2}) } \phi} \abs{\bfZ^{(k_{2})} \phi},
\end{equation}
using \eqref{eq:ptwise-CSH}. As before, we take the $\wnrm{\cdot}_{L^{p}_{\tau}}$ norm and apply H\"older's inequality to bound the highest order factor (with an appropriate weight of $\cosh y$) using $\wnrm{\cdot}_{L^{p}_{\tau}}$ and the rest using $\wnrm{\cdot}_{L^{\infty}_{\tau}}$.
As in the preceding case, the order of the non-highest factors cannot exceed $1$, so the non-highest order factors are bounded from above by $C \tau^{-1} \cosh^{-1} y \log^{3}(1+\tau)$. Since there are $2\ell+2$ such factors (where $\ell \geq 1$), it can be readily checked that the contribution of \eqref{eq:comm-NZ:CSH-nonmain} is acceptable as well.
\paragraph*{\bfseries - Case 2: $J = J_{\mathrm{CSD}}$}
In this case, we immediately apply \eqref{eq:key-comm-NZ} and the crude bound \eqref{eq:crude-Gmm-N} to estimate
\begin{equation*}
\cosh y \abs{[\bfN, \bfZ^{(m)}] \phi} \leq C_{m} \tau \cosh^{3} y \sum_{k_{1} + k_{2} \leq m-1} \abs{{}^{(A)}\calL^{(k_{1})}_{Z} J_{\mathrm{CSD}}(\phi)} \abs{\bfZ^{(k_{2})} \phi}.
\end{equation*}
By Proposition~\ref{prop:comm-J-CSD} and Lemma~\ref{lem:ptwise-CSD}, it follows that
\begin{equation} \label{eq:comm-NZ:CSD}
\cosh y \abs{[\bfN, \bfZ^{(m)}] \phi} \leq C_{m} \sum_{j_{1} + j_{2} + k_{2} \leq m-1} \tau \cosh^{3} y \abs{\bfZ^{(j_{1})} \phi} \abs{\bfZ^{(j_{2})} \phi} \abs{\bfZ^{(k_{2})} \phi}.
\end{equation}
From this pointwise bound, which is far simpler than the case of \eqref{eq:CSH}, it is straightforward to prove \eqref{eq:comm-NZ} (hence Lemma~\ref{lem:BA:NZ} as well); we omit the details.
\end{proof}
The bounds derived so far are insufficient to close the bootstrap; in particular, they cannot be used to bound the commutator $[\bfZ^{(m)}, {}^{(A)} \Box - 1] \phi$, because they grow too fast in $\tau$. The cost we incur is (at least) $\log^{2} (1+\tau)$, which arises from the loss of two $\bfZ$ derivatives in the application of the Klainerman--Sobolev inequality. To estimate the commutator $[\bfZ^{(m)}, {}^{(A)} \Box - 1] \phi$, we use the sharp $L^{p}$ bounds established in the following lemma, which are proved by essentially interpolating between the $L^{2}$ bounds \eqref{eq:BA:L2}, \eqref{eq:BA:L2:S} and the sharp $L^{\infty}$ decay \eqref{eq:BA:Linfty}.
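Schematically (ignoring the $\cosh y$ weights and the loss of $\bfZ$ derivatives in \eqref{eq:GN}), the interpolation mechanism is simply H\"older's inequality: for $2 \leq p \leq \infty$,
\begin{equation*}
\| f \|_{L^{p}} \leq \| f \|_{L^{2}}^{\frac{2}{p}} \| f \|_{L^{\infty}}^{1 - \frac{2}{p}},
\end{equation*}
so an $L^{2}$ bound of size $\epsilon_{1} \log^{m}(1+\tau)$ combined with $L^{\infty}$ decay of size $\epsilon_{1} \tau^{-1}$ yields a bound of size $\epsilon_{1} \tau^{-1+\frac{2}{p}} \log^{\frac{2m}{p}}(1+\tau)$, which is consistent with the powers of $\tau$ and $\log(1+\tau)$ appearing below.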
\begin{lemma} \label{lem:sharpLp} Suppose that $(A, \phi)$ satisfies the bootstrap assumptions \eqref{eq:BA:L2}, \eqref{eq:BA:L2:S} and \eqref{eq:BA:Linfty}. If $\epsilon_{1} > 0$ is sufficiently small, then the following inequalities hold: For $0 \leq m \leq 2$, we have \begin{align} \label{eq:sharpL3} \wnrm{\cosh y \bfZ^{(m)} \phi}_{L^{3}_{\tau}} + \wnrm{\cosh y \, \bfN \bfZ^{(m)} \phi}_{L^{3}_{\tau}} + \wnrm{\bfT \bfZ^{(m)} \phi}_{L^{3}_{\tau}} \leq & C \epsilon_{1} \, \tau^{-\frac{1}{3}} \log^{m} (1+\tau). \end{align} Moreover, for $0 \leq m \leq 1$, we have \begin{align} \wnrm{\cosh y \bfZ^{(m)} \phi}_{L^{4}_{\tau}} +\wnrm{\cosh y \, \bfN \bfZ^{(m)} \phi}_{L^{4}_{\tau}} + \wnrm{\bfT \bfZ^{(m)} \phi}_{L^{4}_{\tau}} \leq & C \epsilon_{1} \, \tau^{-\frac{1}{2}} \log^{m} (1+\tau) \label{eq:sharpL4}\\ \wnrm{\cosh y \bfZ^{(m)} \phi}_{L^{6}_{\tau}} + \wnrm{\cosh y \, \bfN \bfZ^{(m)} \phi}_{L^{6}_{\tau}} + \wnrm{\bfT \bfZ^{(m)} \phi}_{L^{6}_{\tau}} \leq & C \epsilon_{1} \, \tau^{-\frac{2}{3}} \log^{m} (1+\tau). \label{eq:sharpL6} \end{align} \end{lemma} \begin{remark} Heuristically, the bootstrap assumptions \eqref{eq:BA:L2}, \eqref{eq:BA:L2:S}, \eqref{eq:BA:Linfty} and the sharp $L^{p}$ bounds \eqref{eq:sharpL3}, \eqref{eq:sharpL4}, \eqref{eq:sharpL6} can be conveniently summarized as follows: \begin{itemize} \item $\wnrm{\cosh y \, \phi}_{L^{p}_{\tau}} \leq \tau^{-1+\frac{2}{p}}$ for $2 \leq p \leq \infty$; \item every $\bfZ$ costs $\log (1+\tau)$; \item $\bfN \bfZ^{(m)} \phi$ and $(\cosh y)^{-1} \bfT \bfZ^{(m)} \phi$ obey the same bounds as $\bfZ^{(m)} \phi$. \end{itemize} In reality, we only establish such bounds for certain exponents $p$ and $m$, but these suffice for our application below. \end{remark} \begin{proof} First, by applying \eqref{eq:GN} to $\phi$ with \eqref{eq:BA:L2} and \eqref{eq:BA:Linfty}, the estimates \eqref{eq:sharpL3}, \eqref{eq:sharpL4} and \eqref{eq:sharpL6} for $\bfZ^{(m)} \phi$ follow immediately. 
Next, applying the pointwise inequality \eqref{eq:T-N-Z} with $\tilde{\phi} = \bfZ^{(m)} \phi$, we obtain
\begin{equation*}
\abs{\bfT \bfZ^{(m)} \phi} \leq C \cosh y (\abs{\bfN \bfZ^{(m)} \phi} + \tau^{-1} \abs{\bfZ^{(m+1)} \phi}).
\end{equation*}
Recall that the last term on the right obeys the bound \eqref{eq:Z-lower-order}, which is stronger than what we need to prove \eqref{eq:sharpL3}, \eqref{eq:sharpL4} and \eqref{eq:sharpL6} for $\bfT \bfZ^{(m)} \phi$. Therefore, to prove the lemma, it only remains to establish the estimates \eqref{eq:sharpL3}, \eqref{eq:sharpL4} and \eqref{eq:sharpL6} for $\bfN \bfZ^{(m)} \phi$.
Using \eqref{eq:GN} to interpolate the $L^{2}_{\tau}$ bound \eqref{eq:BA:L2:S} and the sharp decay estimate \eqref{eq:BA:Linfty} for $\bfN \phi$, it follows that
\begin{align*}
\wnrm{\cosh y \bfZ^{(m)} \bfN \phi}_{L^{p}_{\tau}} \leq & C \epsilon_{1} \, \tau^{-1+\frac{2}{p}} \log^{m} (1+\tau)
\end{align*}
for $p = 3, 4, 6$ when $0 \leq m \leq 1$, and $p = 3$ when $m = 2$. Applying Lemma~\ref{lem:BA:comm-NZ}, and using \eqref{eq:BA:L2} and Lemma~\ref{lem:BA:NZ} to bound the right-hand side of \eqref{eq:comm-NZ}, it follows that the commutator $[\bfN, \bfZ^{(m)}] \phi$ obeys stronger estimates than needed to establish \eqref{eq:sharpL3}, \eqref{eq:sharpL4} and \eqref{eq:sharpL6}. This completes the proof of the lemma. \qedhere
\end{proof}
We state the following lemma separately, since it will be used a number of times below; it is a trivial consequence of Lemma~\ref{lem:sharpLp} and H\"older's inequality.
\begin{lemma} \label{lem:BA:bilin}
For non-negative integers $k_{1}, k_{2}$ such that $k_{1} + k_{2} \leq 2$, we have
\begin{equation} \label{eq:bilinL2}
\wnrm{\, \abs{\bfT \bfZ^{(k_{1})} \phi} \abs{\bfT \bfZ^{(k_{2})} \phi} \, }_{L^{2}_{\tau}} \leq C \epsilon_{1}^{2} \tau^{-1} \log^{k_{1}+k_{2}} (1+\tau).
\end{equation}
\end{lemma}
\subsection{Improving the nonlinearity estimates}
Our next goal is to obtain the following improvement of \eqref{eq:BA:KG}: For $0 \leq m \leq 4$, we wish to show that
\begin{equation} \label{eq:BA:KG:improved}
\wnrm{\cosh y ({}^{(A)} \Box - 1) \bfZ^{(m)} \phi}_{L^{2}_{\tau}} \leq C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau).
\end{equation}
In particular, note that the power of $\epsilon_{1}$ is $3$ in \eqref{eq:BA:KG:improved}, in contrast to $3-$ in \eqref{eq:BA:KG}; hence for a sufficiently small $\epsilon_{1}$ (independent of $T$), \eqref{eq:BA:KG:improved} would imply
\begin{equation*}
\wnrm{\cosh y ({}^{(A)} \Box - 1) \bfZ^{(m)} \phi}_{L^{2}_{\tau}} \leq \epsilon_{1}^{3-} \tau^{-1} \log^{m-1} (1+\tau),
\end{equation*}
which improves \eqref{eq:BA:KG}.
By \eqref{eq:CS-uni} and the commutation formula \eqref{eq:comm-covKG}, we have the schematic formula
\begin{equation*}
\begin{aligned}
({}^{(A)} \Box - 1) \bfZ^{(m)} \phi =& \bfZ^{(m)} U(\phi) + \sum_{k_{1}+k_{2} \leq m-1} \mathfrak{N}_{1}[{}^{(A)}\calL_{Z}^{(k_{1})} J, \bfZ^{(k_{2})} \phi] \\
& + \sum_{k_{1}+k_{2} \leq m-1} \mathfrak{N}_{2}[{}^{(A)}\calL_{Z}^{(k_{1})} J, \bfZ^{(k_{2})} \phi] + \sum_{k_{1}+k_{2} \leq m-1} \mathfrak{N}_{3}[{}^{(A)}\calL_{Z}^{(k_{1})} J, \bfZ^{(k_{2})} \phi] \\
& + \sum_{k_{1}+k_{2}+k_{3} \leq m-2} \mathfrak{N}_{4}[{}^{(A)}\calL_{Z}^{(k_{1})} J, {}^{(A)}\calL_{Z}^{(k_{2})} J, \bfZ^{(k_{3})} \phi],
\end{aligned}
\end{equation*}
where the last four terms are dropped in the case $m = 0$, and the last term is dropped when $m = 1$. Hence, in order to establish \eqref{eq:BA:KG:improved}, we need to bound $\bfZ^{(m)} U(\phi)$ and each $\mathfrak{N}_{i}$ in $\wnrm{\cosh y (\cdot)}_{L^{2}_{\tau}}$. We achieve this task for \eqref{eq:CSH} and \eqref{eq:CSD} separately. In what follows, we assume that $(A, \phi)$ obeys the bootstrap assumptions \eqref{eq:BA:L2}--\eqref{eq:BA:KG}.
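To orient the reader, we first record the basic H\"older counting on a purely schematic cubic term with no derivative loss (the actual nonlinearities below are only schematically of this form): by \eqref{eq:BA:Linfty} (together with $\cosh y \geq 1$) and \eqref{eq:BA:L2},
\begin{equation*}
\wnrm{\cosh y \, \abs{\phi}^{2} \abs{\bfZ^{(m)} \phi}}_{L^{2}_{\tau}} \leq \wnrm{\cosh y \, \phi}_{L^{\infty}_{\tau}}^{2} \wnrm{\cosh y \, \bfZ^{(m)} \phi}_{L^{2}_{\tau}} \leq C \epsilon_{1}^{3} \tau^{-2} \log^{m} (1+\tau),
\end{equation*}
which is more than enough for \eqref{eq:BA:KG:improved}. The estimates below follow this pattern: the highest order factor is placed in $\wnrm{\cdot}_{L^{2}_{\tau}}$ (or $\wnrm{\cdot}_{L^{p}_{\tau}}$), the low order factors are placed in $\wnrm{\cdot}_{L^{\infty}_{\tau}}$, and any extra powers of $\tau \cosh y$ are absorbed by the decay of the low order factors.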
\subsubsection{Chern--Simons--Higgs equations} We begin by handling the contribution of the $V$-valued potential $U_{\mathrm{CSH}}$. Recall from Section~\ref{subsec:comm-U} that we may write $U_{\mathrm{CSH}}(\phi) = U_{3}[\phi, \phi, \phi] + U_{5}[\phi, \phi, \phi, \phi, \phi]$, where $U_{3}$ and $U_{5}$ obey the Leibniz rules in Lemma~\ref{lem:comm-U-CSH}, and the pointwise bounds in Lemma~\ref{lem:ptwise-U}. Using the simple bounds in Lemma~\ref{lem:BA:KlSob} and H\"older's inequality, the following improved bounds can be shown: \begin{proposition} \label{prop:U-CSH} Let $(A, \phi)$ obey the bootstrap assumptions in Section~\ref{subsec:BAs}. Then for $0 \leq m \leq 4$, we have \begin{align} \wnrm{\cosh y \bfZ^{(m)} U_{3}[\phi, \phi, \phi]}_{L^{2}_{\tau}} \leq & C \epsilon_{1}^{3} \tau^{-2+} \\ \wnrm{\cosh y \bfZ^{(m)} U_{5}[\phi, \phi, \phi, \phi, \phi]}_{L^{2}_{\tau}} \leq & C \epsilon_{1}^{5} \tau^{-4+}. \end{align} \end{proposition} We omit the straightforward details. These bounds show that the contribution of $U_{\mathrm{CSH}}$ is acceptable for the proof of \eqref{eq:BA:KG:improved}. Next, we turn to the terms $\mathfrak{N}_{j}$. To complete the proof of \eqref{eq:BA:KG:improved}, it suffices to establish the following bounds: \begin{proposition} \label{prop:N-CSH} Let $(A, \phi)$ obey the bootstrap assumptions in Section~\ref{subsec:BAs}. 
Then for $0 \leq m \leq 4$, we have \begin{align} \sum_{k_{1}+k_{2} \leq m-1} \wnrm{\cosh y \, \mathfrak{N}_{1}[{}^{(A)}\calL_{Z}^{(k_{1})} J_{\mathrm{CSH}}, \bfZ^{(k_{2})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau) \label{eq:CSH:N1} \\ \sum_{k_{1}+k_{2} \leq m-1} \wnrm{\cosh y \, \mathfrak{N}_{2}[{}^{(A)}\calL_{Z}^{(k_{1})} J_{\mathrm{CSH}}, \bfZ^{(k_{2})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau) \label{eq:CSH:N2} \\ \sum_{k_{1}+k_{2} \leq 3} \wnrm{\cosh y \, \mathfrak{N}_{3}[{}^{(A)}\calL_{Z}^{(k_{1})} J_{\mathrm{CSH}}, \bfZ^{(k_{2})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{3} \tau^{-2+} \label{eq:CSH:N3} \\ \sum_{k_{1}+k_{2}+k_{3} \leq 2} \wnrm{\cosh y \, \mathfrak{N}_{4}[{}^{(A)}\calL_{Z}^{(k_{1})} J_{\mathrm{CSH}}, {}^{(A)}\calL_{Z}^{(k_{2})} J_{\mathrm{CSH}}, \bfZ^{(k_{3})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{5} \tau^{-2+} \label{eq:CSH:N4} \end{align} \end{proposition} To prove this proposition, it is useful to first establish the following preliminary bilinear estimates, which may be done only using the bounds in Lemmas~\ref{lem:BA:KlSob} and \ref{lem:BA:NZ}. \begin{lemma} \label{lem:CSH:weakBilin} Let $(A, \phi)$ obey the bootstrap assumptions in Section~\ref{sec:BA}. 
Then we have
\begin{align}
\sum_{k_{1}+k_{2} \leq 3} \wnrm{\cosh y \, \abs{\Gamma_{\mathrm{CSH}}^{(0)} [\bfZ^{(k_{1})} \phi, \bfZ^{(k_{2})} \phi]} \, }_{L^{2}_{\tau}} \leq & C \epsilon_{1}^{2} \tau^{-1+}, \label{eq:CSH:Gmm0} \\
\sum_{k_{1}+k_{2} \leq 2} \wnrm{\cosh y \, \abs{{}^{(A)}\calL_{Z} \Gamma_{\mathrm{CSH}}^{(0)} [\bfZ^{(k_{1})} \phi, \bfZ^{(k_{2})} \phi]} \, }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{2} \tau^{-1+}, \label{eq:CSH:ZGmm0} \\
\sum_{k_{1}+k_{2} \leq 2} \wnrm{{}^{(A)}\ud \Gamma_{\mathrm{CSH}}^{(0)} [\bfZ^{(k_{1})} \phi, \bfZ^{(k_{2})} \phi]}_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{2} \tau^{-1+}, \label{eq:CSH:dGmm0} \\
\sum_{k_{1}+k_{2} \leq 2} \wnrm{{}^{(A)}\dlt \Gamma_{\mathrm{CSH}}^{(0)} [\bfZ^{(k_{1})} \phi, \bfZ^{(k_{2})} \phi]}_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{2} \tau^{-1+}. \label{eq:CSH:dltGmm0}
\end{align}
\end{lemma}
\begin{proof}
All four bounds are proved in the same way: First, apply the formulae and pointwise bounds in Section~\ref{sec:comm}, then use H\"older's inequality, Lemmas~\ref{lem:BA:KlSob} and \ref{lem:BA:NZ}, as well as \eqref{eq:BA:L2}. We give a detailed proof only for \eqref{eq:CSH:dltGmm0}, which is the most involved, and leave the rest to the reader.
Fix a pair of non-negative integers $k_{1}, k_{2}$ such that $k_{1} + k_{2} \leq 2$. By \eqref{eq:covdlt-CSH} and the pointwise bounds in Section~\ref{subsec:ptwise}, we have
\begin{align*}
& \hskip-2em \abs{{}^{(A)}\dlt \Gamma_{\mathrm{CSH}}^{(0)} [\bfZ^{(k_{1})} \phi, \bfZ^{(k_{2})} \phi]} \\
\leq & C \Big( \abs{\bfT \bfZ^{(k_{1})} \phi} \abs{\bfT \bfZ^{(k_{2})} \phi} + \abs{\bfZ^{(k_{1})} \phi} \abs{({}^{(A)} \Box - 1) \bfZ^{(k_{2})} \phi} + \abs{\bfZ^{(k_{1})} \phi} \abs{\bfZ^{(k_{2})}\phi} \Big).
\end{align*} We bound the $\wnrm{\cdot}_{L^{2}_{\tau}}$ norm of the preceding expression as follows: The contribution of the first term is treated by applying H\"older's inequality, bounding the higher order factor in $\wnrm{\cdot}_{L^{2}_{\tau}}$ and the other in $\wnrm{\cdot}_{L^{\infty}_{\tau}}$, and then appealing to \eqref{eq:BA:L2} and Lemma~\ref{lem:BA:NZ}. The third term is handled similarly, where we replace Lemma~\ref{lem:BA:NZ} by Lemma~\ref{lem:BA:KlSob}. Finally, for the second term, we bound $\bfZ^{(k_{1})} \phi$ in $\wnrm{\cdot}_{L^{\infty}_{\tau}}$ and $({}^{(A)} \Box - 1) \bfZ^{(k_{2})} \phi$ in $\wnrm{\cdot}_{L^{2}_{\tau}}$, then use Lemma~\ref{lem:BA:KlSob} for the former and \eqref{eq:BA:KG} for the latter. \qedhere \end{proof} We are now ready to prove \eqref{eq:CSH:N1}--\eqref{eq:CSH:N4}, which would prove Proposition~\ref{prop:N-CSH}. \begin{proof}[Proof of \eqref{eq:CSH:N1}] In order to prove \eqref{eq:CSH:N1}, it suffices by \eqref{eq:comm-J-CSH} and the facts that $\epsilon_{1} \leq 1$, $\tau \geq 2R$ to establish the following bounds: For $1 \leq m \leq 4$, \begin{align} \sum_{k_{1}+k_{2}+k_{3} \leq m-1} \wnrm{\cosh y \,\mathfrak{N}_{1}[\Gamma_{\mathrm{CSH}}^{(0)}[\bfZ^{(k_{1})} \phi, \bfZ^{(k_{2})} \phi], \bfZ^{(k_{3})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau) \label{eq:CSH:N1:1} \\ \sum_{k_{1}+\cdots+k_{5} \leq 2} \wnrm{\cosh y \,\mathfrak{N}_{1}[\Gamma_{\mathrm{CSH}}^{(1)}[\bfZ^{(k_{1})} \phi, \ldots, \bfZ^{(k_{4})} \phi], \bfZ^{(k_{5})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{5} \tau^{-2+} \label{eq:CSH:N1:2}\\ \sum_{k_{1}+\cdots+k_{7} \leq 1} \wnrm{\cosh y \,\mathfrak{N}_{1}[\Gamma_{\mathrm{CSH}}^{(2)}[\bfZ^{(k_{1})} \phi, \ldots, \bfZ^{(k_{6})} \phi], \bfZ^{(k_{7})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{7} \tau^{-3+} \label{eq:CSH:N1:3}\\ \wnrm{\cosh y \,\mathfrak{N}_{1}[\Gamma_{\mathrm{CSH}}^{(3)}[\phi, \phi, \ldots, \phi], \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{9} \tau^{-4+} 
\label{eq:CSH:N1:4} \end{align} To simplify the notation, we will often use the shorthand $\phi^{j} = \bfZ^{(k_{j})} \phi$ in what follows. \paragraph*{\bfseries - Proof of \eqref{eq:CSH:N1:1}} By \eqref{eq:ptwise-N1} and \eqref{eq:covud-CSH}, we first derive the following pointwise bound: \begin{align*} \cosh y \abs{\mathfrak{N}_{1}[\Gamma_{\mathrm{CSH}}^{(0)}[\phi^{1}, \phi^{2}], \phi^{3}]} \leq & C \tau \cosh y \abs{\iota_{N} {}^{(A)}\ud \Gamma_{\mathrm{CSH}}^{(0)}[\phi^{1}, \phi^{2}]}\abs{\phi^{3}} \\ \leq & C \tau \cosh y \Big( \abs{\bfT \phi^{1}} \abs{\bfN \phi^{2}} + \abs{\bfN \phi^{1}} \abs{\bfT \phi^{2}} + \abs{\phi} \abs{\phi} \abs{\phi^{1}} \abs{\phi^{2}} \Big)\abs{\phi^{3}} . \end{align*} We now take the $\wnrm{\cdot}_{L^{2}_{\tau}}$ norm. The cubic terms can be estimated by $C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau)$, using \eqref{eq:BA:L2}, \eqref{eq:NZ-L2} and Lemma \ref{lem:sharpLp}. The quintic term can be easily bounded by $\epsilon_{1}^{5} \tau^{-4+}$ using \eqref{eq:BA:L2} for the highest order factor and \eqref{eq:weakLinfty} for the rest. The point is that all factors except the highest order factor have at most two $\bfZ$ derivatives, and thus \eqref{eq:weakLinfty} is applicable. \paragraph*{\bfseries - Proof of \eqref{eq:CSH:N1:2}, \eqref{eq:CSH:N1:3} and \eqref{eq:CSH:N1:4}} For the remaining cases \eqref{eq:CSH:N1:2}, \eqref{eq:CSH:N1:3} and \eqref{eq:CSH:N1:4}, the nonlinearity is quintic or higher and the total number of $\bfZ$ derivatives is $\leq 2$. These cases turn out to be much less delicate than \eqref{eq:CSH:N1:1}, and can be treated using just \eqref{eq:BA:L2}, \eqref{eq:weakLinfty} (in Lemma \ref{lem:BA:KlSob}), Lemma~\ref{lem:BA:bilin} and Lemma~\ref{lem:CSH:weakBilin}. Furthermore, we may rely on the crude pointwise bound \eqref{eq:ptwise-easy} to treat $\iota_{Z} \star$. Since the arguments are similar, we only present the case of \eqref{eq:CSH:N1:2} in detail, and briefly sketch the others.
To prove \eqref{eq:CSH:N1:2}, we distinguish two types of terms, namely those which do not involve commutation of ${}^{(A)}\ud$ with $\iota_{Z} \star$, and those which arise from this commutation. More precisely, by \eqref{eq:Gmm-CSH-1}, \eqref{eq:ptwise-easy}, \eqref{eq:ptwise-N1-easy} and Lemma~\ref{lem:d-i-star}, we first bound $\cosh y \abs{\mathfrak{N}_{1}[\Gamma_{\mathrm{CSH}}^{(1)}[\phi^{1}, \ldots, \phi^{4}], \phi^{5}]}$ by \begin{align} & C\tau \cosh^{2} y \abs{{}^{(A)}\ud \phi^{1} \wedge (\iota_{Z} \star \Gamma_{\mathrm{CSH}}^{(0)}[\phi^{3}, \phi^{4}]) \wedge \phi^{2}} \abs{\phi^{5} } \label{eq:CSH:N1:2:1} \\ & + C\tau \cosh^{2} y \abs{\phi^{1} \wedge (\iota_{Z} \star \Gamma_{\mathrm{CSH}}^{(0)}[\phi^{3}, \phi^{4}]) \wedge {}^{(A)}\ud \phi^{2}} \abs{\phi^{5} } \label{eq:CSH:N1:2:1.5} \\ & + C\tau \cosh^{2} y \abs{\phi^{1}} \abs{\phi^{2}} \abs{\iota_{Z} \star {}^{(A)}\dlt \Gamma_{\mathrm{CSH}}^{(0)}[\phi^{3}, \phi^{4}] } \abs{\phi^{5}} \label{eq:CSH:N1:2:2} \\ & + C\tau \cosh^{2} y \abs{\phi^{1}} \abs{\phi^{2}} \abs{\star {}^{(A)}\calL_{Z} \Gamma_{\mathrm{CSH}}^{(0)}[\phi^{3}, \phi^{4}] } \abs{\phi^{5}}, \label{eq:CSH:N1:2:3} \end{align} where we recall that $k_{1} + \cdots + k_{5} \leq 2$. Note that \eqref{eq:CSH:N1:2:1} and \eqref{eq:CSH:N1:2:1.5} are precisely the terms where ${}^{(A)}\ud$ does not fall on $\iota_{Z} \star$, whereas \eqref{eq:CSH:N1:2:2} and \eqref{eq:CSH:N1:2:3} arise from commuting ${}^{(A)}\ud$ with $\iota_{Z} \star$ using Lemma~\ref{lem:d-i-star}.
For \eqref{eq:CSH:N1:2:1} and \eqref{eq:CSH:N1:2:1.5}, we first apply \eqref{eq:ptwise-easy} to derive the pointwise estimate \begin{align*} \eqref{eq:CSH:N1:2:1} + \eqref{eq:CSH:N1:2:1.5} \leq & C\tau^{2} (\cosh y)^{3} (\abs{\bfT \phi^{1}}\abs{\phi^{2}} + \abs{\phi^{1}}\abs{\bfT \phi^{2}}) \abs{\phi^{3}} \abs{\bfT \phi^{4}} \abs{\phi^{5}} \end{align*} and then estimate the $\wnrm{\cdot}_{L^{2}_{\tau}}$ norm of the right-hand side by $C\epsilon_{1}^{5} \tau^{-2+}$, using Lemma \ref{lem:BA:bilin} for the factors with $\bfT$ and \eqref{eq:weakLinfty} for the rest. For \eqref{eq:CSH:N1:2:2}, we begin with the pointwise bound \begin{align*} \eqref{eq:CSH:N1:2:2} \leq & C\tau^{2} (\cosh y)^{3} \abs{\phi^{1}} \abs{\phi^{2}} \abs{{}^{(A)}\dlt \Gamma_{\mathrm{CSH}}^{(0)} [\phi^{3}, \phi^{4}]} \abs{\phi^{5}}, \end{align*} which follows from \eqref{eq:ptwise-easy}, and then estimate the $\wnrm{\cdot}_{L^{2}_{\tau}}$ norm of the right-hand side by $C \epsilon_{1}^{5} \tau^{-2+} $ using \eqref{eq:weakLinfty} and \eqref{eq:CSH:dltGmm0}. Finally, for \eqref{eq:CSH:N1:2:3}, we apply \eqref{eq:ptwise-easy} to estimate \begin{align*} \eqref{eq:CSH:N1:2:3} \leq & C\tau (\cosh y)^{2} \abs{\phi^{1}} \abs{\phi^{2}} \abs{{}^{(A)}\calL_{Z} \Gamma_{\mathrm{CSH}}^{(0)} [\phi^{3}, \phi^{4}]} \abs{\phi^{5}}, \end{align*} and then the $\wnrm{\cdot}_{L^{2}_{\tau}}$ norm of the right-hand side is estimated by $C \epsilon_{1}^{5} \tau^{-3+}$ using \eqref{eq:weakLinfty} and \eqref{eq:CSH:ZGmm0}. Now we sketch the proofs of \eqref{eq:CSH:N1:3} and \eqref{eq:CSH:N1:4}. 
For \eqref{eq:CSH:N1:3}, we begin by estimating $\cosh y \abs{\mathfrak{N}_{1}[\Gamma_{\mathrm{CSH}}^{(2)}[\phi^{1}, \ldots, \phi^{6}], \phi^{7}]}$ by \begin{align} & C \tau^{3} (\cosh y)^{4} \Big( \abs{\bfT \phi^{1}} \abs{\phi^{2}} \abs{\phi^{3}} \abs{\phi^{4}} + \cdots + \abs{\phi^{1}} \abs{\phi^{2}} \abs{\phi^{3}} \abs{\bfT \phi^{4}} \Big) \abs{\phi^{5}} \abs{\bfT \phi^{6}} \abs{\phi^{7}} \label{eq:CSH:N1:3:1} \\ & + C \tau \cosh^{2} y \abs{\phi^{1}} \cdots \abs{\phi^{4}} \abs{(\iota_{Z} \star)^{2} {}^{(A)}\ud \Gamma^{(0)}_{\mathrm{CSH}} [\phi^{5}, \phi^{6}]} \abs{\phi^{7}} \label{eq:CSH:N1:3:2} \\ & + C\tau \cosh^{2} y \abs{\phi^{1}} \cdots \abs{\phi^{4}} \abs{\star {}^{(A)}\calL_{Z} \iota_{Z} \star \Gamma^{(0)}_{\mathrm{CSH}} [\phi^{5}, \phi^{6}]} \abs{\phi^{7}} \label{eq:CSH:N1:3:3} \\ & + C \tau \cosh^{2} y \abs{\phi^{1}} \cdots \abs{\phi^{4}} \abs{\iota_{Z} (\Gamma^{(0)}_{\mathrm{CSH}} [\phi^{5}, \phi^{6}] \wedge \mathrm{d} Z^{\flat})} \abs{\phi^{7}} . \label{eq:CSH:N1:3:4} \end{align} Note that \eqref{eq:CSH:N1:3:1} is precisely the contribution of the terms with ${}^{(A)}\ud$ not falling on $(\iota_{Z} \star)^{2}$ (where we used the trivial bounds \eqref{eq:ptwise-easy} to simplify the expression), whereas \eqref{eq:CSH:N1:3:2}--\eqref{eq:CSH:N1:3:4} arise from commuting ${}^{(A)}\ud$ with $(\iota_{Z} \star)^{2}$ using Lemma~\ref{lem:d-i-star} (twice). Recall that $k_{1} + \cdots + k_{7} \leq 1$. We estimate $\wnrm{\eqref{eq:CSH:N1:3:1}}_{L^{2}_{\tau}} \leq C \epsilon_{1}^{7} \tau^{-3+}$ using \eqref{eq:weakLinfty}, \eqref{eq:ptwise-easy} and Lemma \ref{lem:BA:bilin}, as in the case of \eqref{eq:CSH:N1:2:1}. 
Furthermore, using \eqref{eq:weakLinfty}, \eqref{eq:CSH:Gmm0}, \eqref{eq:CSH:ZGmm0}, \eqref{eq:CSH:dGmm0} and \eqref{eq:ptwise-easy}, we have $\wnrm{\eqref{eq:CSH:N1:3:2}+\eqref{eq:CSH:N1:3:4}}_{L^{2}_{\tau}} \leq C \epsilon_{1}^{7} \tau^{-3+}$ and $\wnrm{\eqref{eq:CSH:N1:3:3}}_{L^{2}_{\tau}} \leq C \epsilon_{1}^{7} \tau^{-4+}$, from which \eqref{eq:CSH:N1:3} follows. It only remains to prove \eqref{eq:CSH:N1:4}. Using \eqref{eq:ptwise-N1-easy}, we bound $\cosh y \abs{\mathfrak{N}_{1}[\Gamma_{\mathrm{CSH}}^{(3)}[\phi, \ldots, \phi], \phi]}$ by \begin{align} & C \tau^{4} (\cosh y)^{5} \abs{\bfT \phi}^{2} \abs{\phi}^{7} \label{eq:CSH:N1:4:1} \\ & + C\tau \cosh^{2} y \abs{\phi}^{7} \Big( \abs{(\iota_{Z} \star)^{3} {}^{(A)}\dlt \Gamma_{\mathrm{CSH}}^{(0)}[\phi, \phi]} + C\abs{\iota_{Z} ((\iota_{Z} \star \Gamma_{\mathrm{CSH}}^{(0)}[\phi, \phi]) \wedge \mathrm{d} Z^{\flat})} \Big) \label{eq:CSH:N1:4:2} \\ & + C \tau \cosh^{2} y \abs{\phi}^{7} \Big( \abs{\iota_{Z} \star \iota_{Z} {}^{(A)}\calL_{Z} \Gamma_{\mathrm{CSH}}^{(0)}[\phi, \phi]} + C \abs{\star {}^{(A)}\calL_{Z} (\iota_{Z} \star)^{2} \Gamma_{\mathrm{CSH}}^{(0)}[\phi, \phi]} \Big), \label{eq:CSH:N1:4:3} \end{align} where \eqref{eq:CSH:N1:4:1} is again the contribution of the terms with ${}^{(A)}\ud$ not falling on $(\iota_{Z} \star)^{3}$, and \eqref{eq:CSH:N1:4:2}--\eqref{eq:CSH:N1:4:3} arise from applying Lemma~\ref{lem:d-i-star} (three times). We have $\wnrm{\eqref{eq:CSH:N1:4:1}}_{L^{2}_{\tau}} \leq C \epsilon_{1}^{9} \tau^{-4+}$ using Lemma \ref{lem:BA:bilin}, \eqref{eq:weakLinfty} and \eqref{eq:ptwise-easy}, again as in \eqref{eq:CSH:N1:2:1}. Using \eqref{eq:weakLinfty}, \eqref{eq:CSH:Gmm0}, \eqref{eq:CSH:ZGmm0}, \eqref{eq:CSH:dltGmm0} and \eqref{eq:ptwise-easy}, we also have $\wnrm{\eqref{eq:CSH:N1:4:2}}_{L^{2}_{\tau}} \leq C \epsilon_{1}^{9} \tau^{-4+}$ and $\wnrm{\eqref{eq:CSH:N1:4:3}}_{L^{2}_{\tau}} \leq C \epsilon_{1}^{9} \tau^{-4+}$. The desired estimate \eqref{eq:CSH:N1:4} now follows, which concludes our proof of \eqref{eq:CSH:N1}.
\qedhere \end{proof} \begin{proof}[Proof of \eqref{eq:CSH:N2}] This case obeys the same estimate as \eqref{eq:CSH:N1} (see \eqref{eq:CSH:N2:1}); however, since there is no need to compute ${}^{(A)}\ud J_{\mathrm{CSH}}$, the amount of work needed is much less than for \eqref{eq:CSH:N1}. As in our proof of \eqref{eq:CSH:N1}, it suffices to establish the following bounds: For $1 \leq m \leq 4$, \begin{align} \sum_{k_{1}+k_{2}+k_{3} \leq m-1} \wnrm{\cosh y \,\mathfrak{N}_{2}[\Gamma_{\mathrm{CSH}}^{(0)}[\bfZ^{(k_{1})} \phi, \bfZ^{(k_{2})} \phi], \bfZ^{(k_{3})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau) \label{eq:CSH:N2:1} \\ \sum_{k_{1}+\cdots+k_{5} \leq 2} \wnrm{\cosh y \,\mathfrak{N}_{2}[\Gamma_{\mathrm{CSH}}^{(1)}[\bfZ^{(k_{1})} \phi, \ldots, \bfZ^{(k_{4})} \phi], \bfZ^{(k_{5})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{5} \tau^{-2+} \label{eq:CSH:N2:2}\\ \sum_{k_{1}+\cdots+k_{7} \leq 1} \wnrm{\cosh y \,\mathfrak{N}_{2}[\Gamma_{\mathrm{CSH}}^{(2)}[\bfZ^{(k_{1})} \phi, \ldots, \bfZ^{(k_{6})} \phi], \bfZ^{(k_{7})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{7} \tau^{-3+} \label{eq:CSH:N2:3}\\ \wnrm{\cosh y \,\mathfrak{N}_{2}[\Gamma_{\mathrm{CSH}}^{(3)}[\phi, \phi, \ldots, \phi], \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{9} \tau^{-4+} \label{eq:CSH:N2:4} \end{align} As before, we use the shorthand $\phi^{j} = \bfZ^{(k_{j})} \phi$ in what follows. \paragraph*{\bfseries - Proof of \eqref{eq:CSH:N2:1}} By \eqref{eq:ptwise-N1}, we have \begin{align*} \cosh y \abs{\mathfrak{N}_{2}[\Gamma_{\mathrm{CSH}}^{(0)}[\phi^{1}, \phi^{2}], \phi^{3}]} \leq & C \tau \cosh y \abs{\phi^{1}} (\abs{\bfN \phi^{2}} \abs{\bfT \phi^{3}} + \abs{\bfT \phi^{2}} \abs{\bfN \phi^{3}}), \end{align*} where $k_{1}+k_{2}+k_{3} \leq m-1$ and $1 \leq m \leq 4$.
Then using \eqref{eq:BA:L2}, \eqref{eq:NZ-L2} and Lemma \ref{lem:sharpLp}, the $\wnrm{\cdot}_{L^{2}_{\tau}}$ norm of the right-hand side can be estimated by $C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1 + \tau)$ as desired. \paragraph*{\bfseries - Proof of \eqref{eq:CSH:N2:2}, \eqref{eq:CSH:N2:3} and \eqref{eq:CSH:N2:4}} As in our preceding proof of \eqref{eq:CSH:N1:2}--\eqref{eq:CSH:N1:4}, there is more room in this case, which can be treated using just \eqref{eq:weakLinfty} and Lemma \ref{lem:BA:bilin}, relying on the pointwise bounds \eqref{eq:ptwise-N2-easy} and \eqref{eq:ptwise-CSH}. We omit the straightforward details. \qedhere \end{proof} \begin{proof}[Proof of \eqref{eq:CSH:N3} and \eqref{eq:CSH:N4}] The estimates \eqref{eq:CSH:N3} and \eqref{eq:CSH:N4} are easier than the preceding cases, and can be proved by techniques similar to those used before. The key ingredients are: the pointwise bounds \eqref{eq:ptwise-N3} and \eqref{eq:ptwise-N4} for $\mathfrak{N}_{3}$ and $\mathfrak{N}_{4}$, respectively; Proposition~\ref{prop:comm-J-CSH}, which allows us to expand ${}^{(A)}\calL_{Z}^{(k)} J_{\mathrm{CSH}}$ in terms of $\Gamma_{\mathrm{CSH}}^{(\ell)}$ as in \eqref{eq:comm-J-CSH}; the general pointwise bound \eqref{eq:ptwise-CSH} for $\Gamma_{\mathrm{CSH}}^{(\ell)}$; Lemma~\ref{lem:BA:KlSob}, Lemma~\ref{lem:BA:NZ} and the bound \eqref{eq:BA:L2}. We omit the routine proof.\qedhere \end{proof} \subsubsection{Chern--Simons--Dirac equations} We now consider the case of \eqref{eq:CSD} and establish \eqref{eq:BA:KG:improved}. As before, we first handle the contribution of $U_{\mathrm{CSD}}$. Recall from Section~\ref{subsec:comm-U} that $U_{\mathrm{CSD}}(\phi) = \tilde{U}_{3}[\phi, \phi, \phi]$, where $\tilde{U}_{3}$ obeys the Leibniz rules in Lemma~\ref{lem:comm-U-CSD} and the pointwise bounds in Lemma~\ref{lem:ptwise-U}.
Then by Lemma~\ref{lem:BA:KlSob} and H\"older's inequality, we obtain: \begin{proposition} \label{prop:U-CSD} Let $(A, \phi)$ obey the bootstrap assumptions in Section~\ref{subsec:BAs}. Then for $0 \leq m \leq 4$, we have \begin{equation} \wnrm{\cosh y \bfZ^{(m)} \tilde{U}_{3}[\phi, \phi, \phi]}_{L^{2}_{\tau}} \leq C \epsilon_{1}^{3} \tau^{-2+}. \end{equation} \end{proposition} We omit the details. Proposition~\ref{prop:U-CSD} shows that the contribution of $U_{\mathrm{CSD}}$ is acceptable for proving \eqref{eq:BA:KG:improved}. Next, we treat the terms $\mathfrak{N}_{j}$. As in the case of \eqref{eq:CSH}, it suffices to establish that the following bounds hold for $\mathfrak{N}_{1}, \ldots, \mathfrak{N}_{4}$: \begin{proposition} \label{prop:N-CSD} Let $(A, \phi)$ obey the bootstrap assumptions in Section~\ref{subsec:BAs}. Then for $0 \leq m \leq 4$, we have \begin{align} \sum_{k_{1}+k_{2} \leq m-1} \wnrm{\cosh y \, \mathfrak{N}_{1}[{}^{(A)}\calL_{Z}^{(k_{1})} J_{\mathrm{CSD}}, \bfZ^{(k_{2})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau), \label{eq:CSD:N1} \\ \sum_{k_{1}+k_{2} \leq m-1} \wnrm{\cosh y \, \mathfrak{N}_{2}[{}^{(A)}\calL_{Z}^{(k_{1})} J_{\mathrm{CSD}}, \bfZ^{(k_{2})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau), \label{eq:CSD:N2} \\ \sum_{k_{1}+k_{2} \leq 3} \wnrm{\cosh y \, \mathfrak{N}_{3}[{}^{(A)}\calL_{Z}^{(k_{1})} J_{\mathrm{CSD}}, \bfZ^{(k_{2})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{3} \tau^{-2+}, \label{eq:CSD:N3} \\ \sum_{k_{1}+k_{2}+k_{3} \leq 2} \wnrm{\cosh y \, \mathfrak{N}_{4}[{}^{(A)}\calL_{Z}^{(k_{1})} J_{\mathrm{CSD}}, {}^{(A)}\calL_{Z}^{(k_{2})} J_{\mathrm{CSD}}, \bfZ^{(k_{3})} \phi] }_{L^{2}_{\tau}} \leq& C \epsilon_{1}^{5} \tau^{-2+}. \label{eq:CSD:N4} \end{align} \end{proposition} We only give a sketch of the proof, since the method is not too different from the previous case of \eqref{eq:CSH}.
In fact, the task of establishing these bounds is far simpler in the case of \eqref{eq:CSD}, thanks in large part to the absence of derivatives in $J_{\mathrm{CSD}}$. \begin{proof} [Sketch of proof] By Proposition~\ref{prop:comm-J-CSD}, it suffices to establish the following bounds involving $\Gamma_{\mathrm{CSD}}^{(\ell)}$: \begin{gather} \sum_{\ell + j_{1} + j_{2} + k \leq m-1} \wnrm{\cosh y \, \mathfrak{N}_{1}[\Gamma_{\mathrm{CSD}}^{(\ell)}[\phi^{1}, \phi^{2}], \tilde{\phi}] }_{L^{2}_{\tau}} \leq C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau), \label{eq:CSD:N1-Gmm} \\ \sum_{\ell + j_{1} + j_{2} + k \leq m-1} \wnrm{\cosh y \, \mathfrak{N}_{2}[\Gamma_{\mathrm{CSD}}^{(\ell)}[\phi^{1}, \phi^{2}], \tilde{\phi}] }_{L^{2}_{\tau}} \leq C \epsilon_{1}^{3} \tau^{-1} \log^{m-1} (1+\tau), \label{eq:CSD:N2-Gmm} \\ \sum_{\ell + j_{1} + j_{2} + k \leq 3} \wnrm{\cosh y \, \mathfrak{N}_{3}[\Gamma_{\mathrm{CSD}}^{(\ell)}[\phi^{1}, \phi^{2}], \tilde{\phi}] }_{L^{2}_{\tau}} \leq C \epsilon_{1}^{3} \tau^{-2+}, \label{eq:CSD:N3-Gmm} \\ \sum_{\ell_{1} + \ell_{2} + j_{1} + j_{2} + j_{3} + j_{4} + k \leq 2} \wnrm{\cosh y \, \mathfrak{N}_{4}[\Gamma_{\mathrm{CSD}}^{(\ell_{1})}[\phi^{1}, \phi^{2}], \Gamma_{\mathrm{CSD}}^{(\ell_{2})}[\phi^{3}, \phi^{4}], \tilde{\phi}] }_{L^{2}_{\tau}} \leq C \epsilon_{1}^{5} \tau^{-2+}, \label{eq:CSD:N4-Gmm} \end{gather} where we have used the shorthand $\phi^{i} = \bfZ^{(j_{i})} \phi$ for $i=1,2,3,4$ and $\tilde{\phi} = \bfZ^{(k)} \phi$. 
By Lemma~\ref{lem:ptwise-N} and \eqref{eq:ptwise-easy}, we may derive the following pointwise bounds: \begin{align*} \cosh y \abs{\mathfrak{N}_{1}[\Gamma_{\mathrm{CSD}}^{(\ell)}[\phi^{1}, \phi^{2}], \tilde{\phi}]} & \leq C \tau (\cosh y)^{2} \Big( \sum_{\set{i_{1}, i_{2}} = \set{1,2}} \abs{\bfT \phi^{i_{1}}}\abs{\phi^{i_{2}}} \Big) \abs{\tilde{\phi}} \\ \cosh y \abs{\mathfrak{N}_{2}[\Gamma_{\mathrm{CSD}}^{(\ell)}[\phi^{1}, \phi^{2}], \tilde{\phi}]} & \leq C \tau (\cosh y)^{2} \abs{\phi^{1}} \abs{\phi^{2}} \abs{\bfT \tilde{\phi}} \\ \cosh y \abs{\mathfrak{N}_{3}[\Gamma_{\mathrm{CSD}}^{(\ell)}[\phi^{1}, \phi^{2}], \tilde{\phi}]} & \leq C \cosh y \abs{\phi^{1}}\abs{\phi^{2}}\abs{\tilde{\phi}} \\ \cosh y \abs{\mathfrak{N}_{4}[\Gamma_{\mathrm{CSD}}^{(\ell_{1})}[\phi^{1}, \phi^{2}], \Gamma_{\mathrm{CSD}}^{(\ell_{2})}[\phi^{3}, \phi^{4}], \tilde{\phi}]} & \leq C \tau^{2} (\cosh y)^{3} \abs{\phi^{1}} \cdots \abs{\phi^{4}} \abs{\tilde{\phi}} \end{align*} Then by H\"older's inequality, \eqref{eq:BA:L2}, Lemma~\ref{lem:sharpLp} and Lemma~\ref{lem:BA:NZ} (only used for \eqref{eq:CSD:N4-Gmm}), it is straightforward to prove the bounds \eqref{eq:CSD:N1-Gmm}--\eqref{eq:CSD:N4-Gmm}. \qedhere \end{proof} \subsection{Improving energy and decay estimates} In this subsection, we will prove \begin{equation} \label{eq:BA:L2:improved} \begin{aligned} &\hskip-2em \sum_{0 \leq m \leq 4} \Big( \wnrm{\cosh y \bfZ^{(m)} \phi}_{L^{2}_{\tau}} + \wnrm{\bfT \bfZ^{(m)} \phi}_{L^{2}_{\tau}} \Big) + \sum_{1 \leq m \leq 4} \wnrm{\cosh y \bfN \bfZ^{(m-1)} \phi}_{L^{2}_{\tau}} \\ & \leq C_{1} (\epsilon + \epsilon_{1}^{3} \log^{m} (1+\tau)) \end{aligned} \end{equation} and \begin{align} \label{eq:BA:Linfty:improved} \wnrm{\cosh y \phi}_{L^{\infty}_{\tau}} + \wnrm{\cosh y \bfN \phi}_{L^{\infty}_{\tau}} + \wnrm{\bfT \phi}_{L^{\infty}_{\tau}} \leq C_{1} (\epsilon + \epsilon_{1}^{3}) \frac{1}{\tau} \end{align} for some constant $0 < C_{1} < \infty$. 
Once these estimates are proved, choosing $\epsilon_{1} = B \epsilon \leq B \delta_{\ast}(R)$ with $B = 2 C_{1}$ and taking $\delta_{\ast}(R)$ sufficiently small, \eqref{eq:BA:L2}, \eqref{eq:BA:L2:S} and \eqref{eq:BA:Linfty} would be improved, and hence the proof of Proposition~\ref{prop:main} would be complete. We begin with \eqref{eq:BA:L2:improved}. By the Chern--Simons equation $F = \star J$ and \eqref{eq:BA:Linfty}, for both \eqref{eq:CSH} and \eqref{eq:CSD} we have \begin{equation*} \int_{2R}^{T} \sup_{\mathcal H_{\tau}} \Big( \sum_{\mu} \abs{F(T_{\mu}, T_{0})}^{2} \Big)^{1/2} \, \mathrm{d} \tau \leq C \epsilon_{1}^{2} \int_{2R}^{\infty} \tau^{-2} \, \mathrm{d} \tau \leq C \epsilon_{1}^{2}. \end{equation*} Taking $\epsilon_{1}$ sufficiently small, we may apply the covariant energy inequality (Proposition~\ref{prop:en}) with $C_{F} = 1$ and $\tau_{0} = 2R$. Recall from \eqref{eq:BA:ini} that $\epsilon$ stands for the size of the data on $\mathcal H_{2R}$. Using also the improved nonlinearity estimate \eqref{eq:BA:KG:improved}, we obtain \eqref{eq:BA:L2:improved}. Next, we turn to proving \eqref{eq:BA:Linfty:improved}. By the pointwise estimate \eqref{eq:T-N-Z}, it suffices to establish \begin{align} \wnrm{\cosh y \phi}_{L^{\infty}_{\tau}} + \wnrm{\cosh y \bfN \phi}_{L^{\infty}_{\tau}} \leq & C_{1}' (\epsilon + \epsilon_{1}^{3}) \frac{1}{\tau} \label{eq:BA:Linfty:improved-2} \end{align} for some constant $0 < C_{1}' < \infty$. To prove \eqref{eq:BA:Linfty:improved-2}, we now apply Proposition~\ref{prop:ODE}. The first term on the right-hand side of \eqref{eq:ODE} is bounded by $C \epsilon$ thanks to the Klainerman--Sobolev inequality.
Next, using the Klainerman--Sobolev inequality and \eqref{eq:BA:L2:improved}, which we just proved, we bound the second term on the right-hand side of \eqref{eq:ODE} by \begin{align*} \sum_{1 \leq k \leq 2} \int_{2R}^{T} \frac{\cosh y}{\tau'} \wnrm{\bfZ^{(k)} \phi}_{L^{\infty}_{\tau'}} \, \mathrm{d} \tau' \leq & C \sum_{1 \leq k \leq 4} \sup_{\tau: 2R \leq \tau < T} \wnrm{\cosh y \bfZ^{(k)} \phi}_{L^{2}_{\tau}} \int_{2R}^{\infty} (\tau')^{-2+} \, \mathrm{d} \tau' \\ \leq & C ( \epsilon + \epsilon_{1}^{3}). \end{align*} To handle the last term in \eqref{eq:ODE}, we begin by noting that for both \eqref{eq:CSH} and \eqref{eq:CSD}, we have \begin{equation*} \abs{({}^{(A)} \Box - 1) \phi} \leq C \abs{\phi}^{2} \max \set{\abs{\phi}, \abs{\bfT \phi}}. \end{equation*} Therefore, by \eqref{eq:BA:Linfty}, we have \begin{align*} \int_{2R}^{T} \tau' \cosh y \wnrm{({}^{(A)} \Box - 1) \phi}_{L^{\infty}_{\tau'}} \, \mathrm{d} \tau' \leq & C \epsilon_{1}^{3}. \end{align*} Since the left-hand side of \eqref{eq:ODE} bounds $\tau \cosh y \abs{\phi}$ and $\tau \cosh y \abs{\bfN \phi}$ from above, \eqref{eq:BA:Linfty:improved-2} follows as desired. \appendix \numberwithin{equation}{section} \section{Reduced systems in the temporal and Cronstr\"om gauges} \label{app:gauge} The goal of this section is to derive reduced systems for \eqref{eq:CSH} and \eqref{eq:CSD} in the temporal and Cronstr\"om gauges, for which local well-posedness and finite speed of propagation are evident. In Section~\ref{subsec:expand}, we first expand various covariant expressions in terms of $A$ and the usual (component-wise) differential operators for vector- and Lie algebra-valued objects. Then in Sections~\ref{subsec:temporal} and \ref{subsec:cronstrom}, we exhibit reduced systems in the temporal and the Cronstr\"om gauges, respectively.
\subsection{Expansion of the covariant expressions} \label{subsec:expand} We begin with the expansion of various covariant expressions that arise in the Chern--Simons systems considered in this paper. \begin{lemma} \label{lem:expand} The following identities hold. \begin{align} {}^{(A)} \Box \phi = & \Box \phi - 2 \star (A \wedge \star \mathrm{d} \phi) - \delta A \, \phi - \star (A \wedge \star (A \phi)), \label{eq:expand:cov-Box} \\ J_{\mathrm{CSH}}(\varphi) =& 2\bbrk{\varphi \wedge \mathrm{d} \varphi} + 2 \bbrk{\varphi \wedge (A \varphi)} \label{eq:expand:J-CSH} \\ \mathrm{d} J_{\mathrm{CSH}}(\varphi) = & 2 \bbrk{\mathrm{d} \varphi \wedge \mathrm{d} \varphi} + 2 \bbrk{\mathrm{d} \varphi \wedge (A \varphi)} + 2 \star \bbrk{\varphi \wedge (J_{\mathrm{CSH}}(\varphi) \, \varphi)} \label{eq:expand:dJ-CSH} \\ & - \bbrk{\varphi \wedge ([A \wedge A] \varphi)} - 2 \bbrk{\varphi \wedge (A \wedge \mathrm{d} \varphi)} , \notag \\ J_{\mathrm{CSD}}(\psi) =& \bbrk{\psi \wedge i \alpha \psi} \label{eq:expand:J-CSD}\\ \mathrm{d} J_{\mathrm{CSD}}(\psi) = & \bbrk{\mathrm{d} \psi \wedge i \alpha \psi} - \bbrk{\psi \wedge (i \alpha \wedge \mathrm{d} \psi)}. \label{eq:expand:dJ-CSD} \end{align} where $\Box$ denotes the usual d'Alembertian $\Box_{\mathbb R^{1+2}} = \nabla^{\mu} \nabla_{\mu}$ acting component-wise. \end{lemma} \begin{proof} The key tool is the calculus developed in Sections~\ref{subsec:extr-calc} and \ref{subsec:extr-calc-2}, which applies in particular to the usual differential operators $\mathrm{d}$, $\calL$, $\Box$ etc.
For \eqref{eq:expand:cov-Box}, we use Lemmas~\ref{lem:star}, \ref{lem:star-aux}, \ref{lem:covBox} and the identity ${}^{(A)}\ud = \mathrm{d} + A \wedge(\cdot)$ to compute \begin{align*} {}^{(A)} \Box \phi =& - \star {}^{(A)}\ud \star {}^{(A)}\ud \phi \\ =& - \star {}^{(A)}\ud \star \mathrm{d} \phi - \star {}^{(A)}\ud \star (A \phi) \\ =& - \star \mathrm{d} \star \mathrm{d} \phi - \star (A \wedge \star \mathrm{d} \phi) - \star \mathrm{d} \star (A \phi) - \star (A \wedge \star (A \phi)) \\ =& \Box \phi - 2 \star (A \wedge \star \mathrm{d} \phi) - \delta A \phi - \star(A \wedge \star(A \phi)). \end{align*} The identity \eqref{eq:expand:J-CSH} follows directly from the definitions. On the other hand, to prove \eqref{eq:expand:dJ-CSH} we use Lemma~\ref{lem:extr-calc-bbrk} and the Leibniz rule for $\mathrm{d}$ to compute \begin{align*} \mathrm{d} J_{\mathrm{CSH}}(\varphi) = & 2 \mathrm{d} \bbrk{\varphi \wedge {}^{(A)}\ud \varphi} \\ = & 2 \bbrk{\mathrm{d} \varphi \wedge \mathrm{d} \varphi} + 2 \bbrk{\mathrm{d} \varphi \wedge (A \wedge \varphi)} + 2 \bbrk{\varphi \wedge \mathrm{d} (A \wedge \varphi)} \\ =& 2 \bbrk{\mathrm{d} \varphi \wedge \mathrm{d} \varphi} + 2 \bbrk{\mathrm{d} \varphi \wedge (A \wedge \varphi)} + 2 \bbrk{\varphi \wedge (\mathrm{d} A \wedge \varphi)} - 2 \bbrk{\varphi \wedge (A \wedge \mathrm{d} \varphi)} \\ =& 2 \bbrk{\mathrm{d} \varphi \wedge \mathrm{d} \varphi} + 2 \bbrk{\mathrm{d} \varphi \wedge (A \wedge \varphi)} + 2 \bbrk{\varphi \wedge (F \wedge \varphi)} \\ & - \bbrk{\varphi \wedge ([A \wedge A] \wedge \varphi)} - 2 \bbrk{\varphi \wedge (A \wedge \mathrm{d} \varphi)}. \end{align*} Then by the Chern--Simons equation $F = \star J_{\mathrm{CSH}}$, \eqref{eq:expand:dJ-CSH} follows. Finally, \eqref{eq:expand:J-CSD} and \eqref{eq:expand:dJ-CSD} are straightforward consequences of the definitions; we remark that $\mathrm{d} \alpha = 0$ is used for the latter, which is clear in the rectilinear coordinates $(t, x^{1}, x^{2})$.
\end{proof} \subsection{Reduced system in the temporal gauge} \label{subsec:temporal} In the temporal gauge, the system \eqref{eq:CS-uni} is equivalent to the following system, for which local well-posedness and finite speed of propagation are rather immediate. \begin{lemma} \label{lem:reduce:temp} Let $I$ be a connected interval, and let $(A, \phi)$ be a pair of (smooth) connection 1-form and $V$-valued function on $I \times \mathbb R^{2} \subseteq \mathbb R^{1+2}$ which obeys the temporal gauge condition $\iota_{\partial_{t}} A = 0$. Then $(A, \phi)$ solves \eqref{eq:CS-uni} on $I \times \mathbb R^{2}$ if and only if it solves the reduced system \begin{equation} \label{eq:reduce:temp} \left\{ \begin{aligned} (\Box - 1) \phi =& 2 \star (A \wedge \star \mathrm{d} \phi) + b \phi + \star(A \wedge \star(A \phi)) + U(\phi) \\ \mathcal L_{\partial_{t}} A = & \star (J \wedge \mathrm{d} t) \\ \mathcal L_{\partial_{t}} b = & - \star (\mathrm{d} J \wedge \mathrm{d} t) \end{aligned} \right. \end{equation} and obeys the constraints \begin{equation} \label{eq:reduce:temp-c} (F - \star J) \restriction_{\Sigma_{t_{0}}}= 0, \qquad (\delta A - b) \restriction_{\Sigma_{t_{0}}} = 0, \end{equation} on $\Sigma_{t_{0}} = \set{t = t_{0}}$ for some $t_{0} \in I$. \end{lemma} Here, the notation $(F - \star J) \restriction_{\Sigma_{t_{0}}}$ refers to the restriction (or pullback) of the 2-form $F - \star J$ to $\Sigma_{t_{0}}$; in coordinates, \begin{equation*} (F - \star J) \restriction_{\Sigma_{t_{0}}} = (F - \star J)_{12} \, \mathrm{d} x^{1} \wedge \mathrm{d} x^{2}. \end{equation*} We also remind the reader that $J$ and $\mathrm{d} J$ were computed in Lemma~\ref{lem:expand}. As will be clear from the proof below, the system \eqref{eq:CS-uni} is in fact already equivalent to the first two equations of \eqref{eq:reduce:temp} if we take $b = \delta A$.
The reason for introducing the auxiliary variable $b$ and the third equation is to exploit the fact that $\delta A$ obeys a `better' transport equation than a general derivative of $A$. In particular, in the case of \eqref{eq:CSH} we only have at most one derivative of $\varphi$ on the right-hand side of $\mathcal L_{\partial_{t}} \delta A$; in general, we expect to see two derivatives from differentiating $\mathcal L_{\partial_{t}} A = \star (J_{\mathrm{CSH}} \wedge \mathrm{d} t)$. This observation is crucial for establishing local well-posedness of \eqref{eq:CSH} in the temporal gauge, since $\delta A$ appears on the right-hand side of the Klein--Gordon equation for $\phi$, and the latter equation only gains one derivative. \begin{proof} [Proof of Lemma~\ref{lem:reduce:temp}] First, we claim that if $(A, \phi)$ is a solution to \eqref{eq:CS-uni}, then \eqref{eq:reduce:temp} and \eqref{eq:reduce:temp-c} are satisfied with $b = \delta A$. Indeed, by Lemma~\ref{lem:expand}, the equation $({}^{(A)} \Box - 1) \phi = U(\phi)$ is equivalent to the first equation of \eqref{eq:reduce:temp} with $b = \delta A$. The second equation follows from $F = \star J$ by taking $\iota_{\partial_{t}}$ and using Cartan's formula \eqref{eq:cartan-eq}. Finally, taking $\delta$ of the second equation and using Lemma~\ref{lem:star}, we have \begin{equation} \label{eq:reduce:temp-dltA} \calL_{\partial_{t}} \delta A = \delta \mathcal L_{\partial_{t}} A = \star \mathrm{d} \star \star (J \wedge \mathrm{d} t) = - \star (\mathrm{d} J \wedge \mathrm{d} t). \end{equation} Here, we have crucially used the fact that $\partial_{t}$ is Killing to commute $\mathcal L_{\partial_{t}}$ with $\delta$. This equation implies the third equation of \eqref{eq:reduce:temp}. To conclude the proof, it remains to show that a solution to \eqref{eq:reduce:temp} and \eqref{eq:reduce:temp-c} also solves \eqref{eq:CS-uni}. 
As a first step, we observe that the constraints \eqref{eq:reduce:temp-c} are propagated by \eqref{eq:reduce:temp}. Indeed, for the first equation of \eqref{eq:reduce:temp-c}, we have \begin{align*} {}^{(A)}\calL_{\partial_{t}} F =& (\calL_{\partial_{t}} + \iota_{\partial_{t}} A) (\mathrm{d} A + \frac{1}{2} [ A \wedge A]) \\ =& \mathrm{d} (\iota_{\partial_{t}} \star J) + [A \wedge (\iota_{\partial_{t}} \star J)] \\ = & {}^{(A)}\ud \iota_{\partial_{t}} \star J = {}^{(A)}\calL_{\partial_{t}} \star J - \iota_{\partial_{t}} {}^{(A)}\ud \star J. \end{align*} Note that the last term vanishes, since ${}^{(A)}\dlt J = \star {}^{(A)}\ud \star J = 0$ for both $J = J_{\mathrm{CSH}}$ and $J_{\mathrm{CSD}}$. Hence ${}^{(A)}\calL_{\partial_{t}}(F - \star J) = 0$, which along with \eqref{eq:reduce:temp-c} implies that $(F - \star J) \restriction_{\Sigma_{t}} = 0$ for every $t \in I$. Next, by \eqref{eq:reduce:temp-dltA} we have $\mathcal L_{\partial_{t}} (\delta A - b) = 0$, which shows that $b = \delta A$ for every $t \in I$ as well. We are now ready to show that $(A, \phi)$ solves \eqref{eq:CS-uni}. As we have just seen, if \eqref{eq:reduce:temp} and \eqref{eq:reduce:temp-c} hold, then $(F - \star J) \restriction_{\Sigma_{t}} = 0$ for every $t \in I$, i.e., the tangential components of the Chern--Simons equation hold. On the other hand, the remaining components $\iota_{\partial_{t}} (F - \star J)$ are precisely the second equation of \eqref{eq:reduce:temp}. Finally, since $b = \delta A$, it follows from Lemma~\ref{lem:expand} that the covariant Klein--Gordon equation holds as well. \qedhere \end{proof} \subsection{Reduced system in the Cronstr\"om gauge} \label{subsec:cronstrom} In the Cronstr\"om gauge, we have the following analogue of Lemma~\ref{lem:reduce:temp}.
\begin{lemma} \label{lem:reduce:cron} Let $I$ be a connected interval, and let $(A, \phi)$ be a pair of (smooth) connection 1-form and $V$-valued function on $\set{\tau \in I} \subseteq \mathbb R^{1+2}$ which obeys the Cronstr\"om gauge condition $\iota_{\partial_{\tau}} A = 0$. Then $(A, \phi)$ solves \eqref{eq:CS-uni} on $\set{\tau \in I}$ if and only if it solves the reduced system \begin{equation} \label{eq:reduce:cron} \left\{ \begin{aligned} (\Box - 1) \phi =& 2 \star (A \wedge \star \mathrm{d} \phi) + b \phi + \star(A \wedge \star(A \phi)) + U(\phi) \\ \mathcal L_{\partial_{\tau}} A = & \star (J \wedge \mathrm{d} \tau) \\ \Big( \mathcal L_{\partial_{\tau}} + \frac{2}{\tau} \Big) b = & - \star (\mathrm{d} J \wedge \mathrm{d} \tau) \end{aligned} \right. \end{equation} and obeys the constraints \begin{equation} \label{eq:reduce:cron-c} (F - \star J) \restriction_{\mathcal H_{\tau_{0}}}= 0, \qquad (\delta A - b) \restriction_{\mathcal H_{\tau_{0}}} = 0, \end{equation} on $\mathcal H_{\tau_{0}}$ for some $\tau_{0} \in I$. \end{lemma} \begin{proof} We only sketch the proof of the following analogue of \eqref{eq:reduce:temp-dltA}: \begin{equation} \label{eq:reduce:cron-dltA} \Big( \calL_{\partial_{\tau}} + \frac{2}{\tau} \Big) \delta A = - \star (\mathrm{d} J \wedge \mathrm{d} \tau), \end{equation} since the rest of the proof is analogous to the temporal gauge case (Lemma~\ref{lem:reduce:temp}). In the derivation of \eqref{eq:reduce:temp-dltA}, the commutation of $\mathcal L_{\partial_{t}}$ and $\star$ was simple due to the fact that $\partial_{t}$ is Killing. In the present case, $\partial_{\tau}$ is \emph{not} Killing; however, we may exploit the fact that $S = \tau \partial_{\tau}$ is \emph{conformally Killing}. Indeed, on the $(1+d)$-dimensional Minkowski space, the scaling vector field $S$ obeys the identities \begin{align*} \calL_{S} \eta = 2 \eta, \quad \calL_{S} \eta^{-1} = - 2 \eta^{-1}, \quad \calL_{S} \epsilon = (1+d) \epsilon. \end{align*} In our case, $1+d = 3$.
Given two real-valued $k$-forms $\omega^{1}$ and $\omega^{2}$, we have \begin{equation*} \calL_{S} \Big( \eta^{-1}(\omega^{1}, \omega^{2}) \epsilon \Big) = \eta^{-1}(\calL_{S} \omega^{1}, \omega^{2}) \epsilon + \eta^{-1}(\omega^{1}, \calL_{S} \omega^{2}) \epsilon + (3-2k) \eta^{-1}(\omega^{1}, \omega^{2}) \epsilon. \end{equation*} Recalling the characterization \eqref{eq:star-def} of $\star$, it follows that \begin{equation*} \calL_{S} \star \omega = \star \calL_{S} \omega + (3-2k) \star \omega. \end{equation*} We now begin the proof in earnest. Assume that the second equation of \eqref{eq:reduce:cron} holds. Computing componentwise for a $\mathfrak{g}$-valued 1-form $A$, we have \begin{align*} \delta \calL_{S} A = & \star \mathrm{d} \star \calL_{S} A \\ = & \calL_{S} \star \mathrm{d} \star A - (3 - 2 \cdot 3) \star \mathrm{d} \star A - (3 - 2) \star \mathrm{d} \star A = (\calL_{S} + 2) \delta A. \end{align*} By Cartan's formula \eqref{eq:cartan-eq} and the fact that $\iota_{S} A = \tau \iota_{\partial_{\tau}} A = 0$, the left-hand side equals \begin{equation*} \delta \star (J \wedge S^{\flat}) = - \star (\mathrm{d} J \wedge S^{\flat}) + \star(J \wedge \mathrm{d} S^{\flat}). \end{equation*} Note that $S^{\flat} = \tau \mathrm{d} \tau = \frac{1}{2} \mathrm{d} \tau^{2}$, hence $\mathrm{d} S^{\flat} = \frac{1}{2} \mathrm{d}^{2} \tau^{2} = 0$. It follows that \begin{equation*} \tau \Big( \calL_{\partial_{\tau}} + \frac{2}{\tau} \Big) \delta A = (\calL_{S} + 2) \delta A = - \tau \star (\mathrm{d} J \wedge \mathrm{d} \tau). \end{equation*} Dividing by $\tau > 0$, \eqref{eq:reduce:cron-dltA} follows. \qedhere \end{proof} \end{document}
p-adic modular form

In mathematics, a p-adic modular form is a p-adic analog of a modular form, with coefficients that are p-adic numbers rather than complex numbers. Serre (1973) introduced p-adic modular forms as limits of ordinary modular forms, and Katz (1973) shortly afterwards gave a geometric and more general definition. Katz's p-adic modular forms include as special cases classical p-adic modular forms, which are more or less p-adic linear combinations of the usual "classical" modular forms, and overconvergent p-adic modular forms, which in turn include Hida's ordinary modular forms as special cases.

Serre's definition

Serre defined a p-adic modular form to be a formal power series with p-adic coefficients that is a p-adic limit of classical modular forms with integer coefficients. The weights of these classical modular forms need not be the same; in fact, if they are then the p-adic modular form is nothing more than a linear combination of classical modular forms. In general the weight of a p-adic modular form is a p-adic number, given by the limit of the weights of the classical modular forms (in fact a slight refinement gives a weight in Zp×Z/(p–1)Z). The p-adic modular forms defined by Serre are special cases of those defined by Katz.

Katz's definition

A classical modular form of weight k can be thought of roughly as a function f from pairs (E,ω) of a complex elliptic curve with a holomorphic 1-form ω to complex numbers, such that f(E,λω) = λ^(−k) f(E,ω), and satisfying some additional conditions such as being holomorphic in some sense. Katz's definition of a p-adic modular form is similar, except that E is now an elliptic curve over some algebra R (with p nilpotent) over the ring of integers R0 of a finite extension of the p-adic numbers, such that E is not supersingular, in the sense that the Eisenstein series Ep–1 is invertible at (E,ω). The p-adic modular form f now takes values in R rather than in the complex numbers.
The p-adic modular form also has to satisfy some other conditions analogous to the condition that a classical modular form should be holomorphic.

Overconvergent forms

Main article: Overconvergent form

Overconvergent p-adic modular forms are similar to the modular forms defined by Katz, except that the form has to be defined on a larger collection of elliptic curves. Roughly speaking, the value of the Eisenstein series Ep–1 on the form is no longer required to be invertible, but can be a smaller element of R. Informally the series for the modular form converges on this larger collection of elliptic curves, hence the name "overconvergent".

References

• Coleman, Robert F. (1996), "Classical and overconvergent modular forms", Inventiones Mathematicae, 124 (1): 215–241, doi:10.1007/s002220050051, ISSN 0020-9910, MR 1369416, S2CID 7995580 • Gouvêa, Fernando Q. (1988), Arithmetic of p-adic modular forms, Lecture Notes in Mathematics, vol. 1304, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0082111, ISBN 978-3-540-18946-6, MR 1027593 • Hida, Haruzo (2004), p-adic automorphic forms on Shimura varieties, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-20711-7, MR 2055355 • Katz, Nicholas M. (1973), "p-adic properties of modular schemes and modular forms", Modular functions of one variable, III (Proc. Internat. Summer School, Univ. Antwerp, Antwerp, 1972), Lecture Notes in Mathematics, vol. 350, Berlin, New York: Springer-Verlag, pp. 69–190, doi:10.1007/978-3-540-37802-0_3, ISBN 978-3-540-06483-1, MR 0447119 • Serre, Jean-Pierre (1973), "Formes modulaires et fonctions zêta p-adiques", Modular functions of one variable, III (Proc. Internat. Summer School, Univ. Antwerp, 1972), Lecture Notes in Math., vol. 350, Berlin, New York: Springer-Verlag, pp. 191–268, doi:10.1007/978-3-540-37802-0_4, ISBN 978-3-540-06483-1, MR 0404145
Analysis of Bayesian posterior significance and effect size indices for the two-sample t-test to support reproducible medical research

Riko Kelter

The replication crisis hit the medical sciences about a decade ago, but most of the flaws inherent in null hypothesis significance testing (NHST) remain unsolved today. While the drawbacks of p-values have been detailed in countless venues, only a few attractive alternatives for replacing p-values and NHST have been proposed for clinical research. Bayesian methods are one of them, and they are gaining increasing attention in medical research, as their advantages include the description of model parameters in terms of probability and the incorporation of prior information, in contrast to the frequentist framework. While Bayesian methods are not the only remedy to the situation, there is increasing agreement that they are an essential way to avoid common misconceptions and false interpretations of study results. The requirements for applying Bayesian statistics have also shifted from detailed programming knowledge to simple point-and-click programs like JASP. Still, the multitude of Bayesian significance and effect measures which contrast with the gold standard of significance in medical research, the p-value, causes a lack of agreement on which measure to report. Therefore, in this paper, we conduct an extensive simulation study to compare common Bayesian significance and effect measures which can be obtained from a posterior distribution. In it, we analyse the behaviour of these measures for one of the most important statistical procedures in medical research and in particular clinical trials, the two-sample Student's (and Welch's) t-test. The results show that some measures cannot state evidence for both the null and the alternative.
While the different indices behave similarly regarding increasing sample size and noise, the prior modelling influences the obtained results, and extreme priors allow for cherry-picking similar to p-hacking in the frequentist paradigm. The indices behave quite differently regarding their ability to control the type I error rates and regarding their ability to detect an existing effect. Based on the results, two of the commonly used indices can be recommended for more widespread use in clinical and biomedical research, as they improve the type I error control compared to the classic two-sample t-test and enjoy multiple other desirable properties.

In randomised clinical trials (RCT), the two-sample Student's and Welch's t-test is one of the most popular statistical procedures conducted. The goal can often be defined as testing the efficacy of a new treatment or medication and investigating the size of an effect. Common settings use a treatment and control group, and the goal is to measure differences in a response variable like blood pressure. The gold standard in medical research for deciding if a new treatment or drug was more effective than the control treatment or drug is the p-value. The p-value states if the researcher can deem the observed difference significant, that is, unlikely to have occurred under the assumption of the null hypothesis. The dominance of p-values when comparing two groups in medical (and other) research is overwhelming: Nuijten et al. [1] showed in a meta-analysis that of 258,105 p-values reported in journals between 1985 and 2013, 26% belonged to a t-statistic, see also Wetzels et al. [2].
In its most restricted setting, the two-sample Student's t-test assumes normally distributed data with identical variances, that is \(Y_{1i}\sim \mathcal {N}(\mu _{1},\sigma ^{2}), Y_{2j}\sim \mathcal {N}(\mu _{2},\sigma ^{2})\), and tests the null hypothesis of no difference at all, that is H0:μ2=μ1, assuming equal sample sizes \(i,j=1,...,n, n\in \mathbb {N}\). Removing the restriction of homoscedasticity – which is the assumption of identical variances \(\sigma _{1}^{2} = \sigma _{2}^{2}\) in both groups – and the assumption of identical sample sizes i=j, the setting leads to the well-known Behrens-Fisher problem, which remains unsolved to this day. The typical practice is to proceed with an approximate solution, known as Welch's two-sample t-test. These approximate solutions are quite reliable, but as frequentist testing makes use of sampling statistics, which only allow rejecting the null hypothesis via the use of p-values, confirming any research hypothesis is not possible. The general procedure of null hypothesis significance testing (NHST), which uses sampling statistics to reject a null hypothesis via p-values, makes formulating any reasonable research hypothesis complicated, as the research hypothesis first has to be rephrased in the form of a rejectable null hypothesis. In some cases, this is not possible at all, further limiting the usefulness of NHST in applied research. Countless papers have criticised the misuse and abuse of p-values in particular in medical research, and official statements of the American Statistical Association (ASA) in 2016 and 2019 by Wasserstein & Lazar [3] and Wasserstein et al. [4] make clear that tensions have not relaxed. The current practice shows that the p-value as a measure of significance is still widely used and resilient to the repeated criticism [5], while being prone to overestimating effects, stating effects if none exist in reality, and false interpretation by scientists [6].
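Welch's statistic and its approximate degrees of freedom are simple to state explicitly. As an illustrative sketch (the computations in this paper were carried out in R; this standalone Python version is only meant to make the Welch-Satterthwaite approximation concrete, and the function name is ours):

```python
import numpy as np
from scipy import stats

def welch_t_test(x, y):
    """Welch's two-sample t-test (unequal variances, unequal sample sizes).

    Returns the t statistic, the Welch-Satterthwaite degrees of freedom
    and the two-sided p-value."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    v1, v2 = x.var(ddof=1) / n1, y.var(ddof=1) / n2
    t = (x.mean() - y.mean()) / np.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 20)
y = rng.normal(0.5, 2.0, 35)
t, df, p = welch_t_test(x, y)
t_ref, p_ref = stats.ttest_ind(x, y, equal_var=False)  # SciPy's Welch test
```

The hand-rolled version agrees with SciPy's built-in Welch test (`equal_var=False`); the approximate degrees of freedom always lie between min(n1, n2) − 1 and n1 + n2 − 2.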
This problem is especially observed in clinical research, see Ioannidis [7]. Among the proposed solutions to the problems of NHST is a shift to Bayesian statistics [4]. It is commonly agreed that a more widespread use of Bayesian methods can at least partially improve the reliability in medical research on a statistical basis [8–10]. Recently, the development of Bayesian counterparts to frequently used statistical tests in medical and social science – including Student's and Welch's two-sample t-test – has opened up new possibilities for researchers: Open-source programs like JASP (https://jasp-stats.org) implement a broad spectrum of Bayesian methods and make them available to a wide range of researchers via a simple point-and-click user interface similar to SPSS. Given the general recommendation of a shift towards the Bayesian paradigm, it is sensible to ask what benefits come with this shift. While NHST focusses on hypothesis testing via p-values and stating the significance of an observed effect, the Bayesian philosophy proceeds by the formulation of a statistical model, the inclusion of available prior information into the analysis, and the derivation of the posterior distribution of the parameters of interest, for example, the effect size in the setting of Student's two-sample t-test. Employing the posterior distribution instead of point estimates, the Bayesian philosophy fosters estimation under uncertainty, in direct contrast to NHST, which commonly uses point estimates like maximum likelihood estimates with confidence intervals, which are often misinterpreted. In NHST, testing for the significance of an effect is the standard approach, but the significance of an effect does not imply that the discovered relationship is also scientifically meaningful. It only means that the observed effect is unlikely to be observed under the assumption of the null hypothesis, no matter how large or small it is.
Also, a non-significant result does not indicate that the null hypothesis is correct, and together these drawbacks of NHST can be seen as the reason why multiple measures of significance and magnitude of an effect based on the posterior distribution have been proposed in the Bayesian literature. In the Bayesian paradigm, inferences about the parameters of interest are drawn from the posterior distribution, and testing is optional. In practice, drawing conclusions from the posterior distribution is achieved by using different posterior indices. There are measures which state the significance of an effect, and measures which also gauge the size of it. Among them are the Bayes factor introduced by Jeffreys [11], the region of practical equivalence (ROPE) championed by Kruschke [12], the probability of direction (PD) as detailed in Makowski et al. [13], the MAP-based p-value proposed by Mills [14], and the Full Bayesian Significance Test (FBST) featuring the e-value, which was introduced by Pereira, Stern and Wechsler [15, 16]. The appropriateness of these indices is still debated in the literature, which makes it challenging to choose among the available indices because by now there is no explicit agreement on which index researchers should use to report the results of a Bayesian analysis [10, 17–19]. What is missing are specific investigations into which of the available measures of significance and effect size are appropriate for a specific statistical method like the two-sample Student's and Welch's t-test. The results of such studies could guide scientists in the selection of an appropriate index to assess the result of a two-sample Student's or Welch's t-test performed in the analysis of clinical trial data. In order to provide such guidance, this paper investigates the behaviour of common Bayesian posterior indices for the presence and size of an effect in the setting of the two-sample Student's and Welch's t-test.
Indices of significance and magnitude of an observed effect

In this section, we briefly review the existing Bayesian indices of significance and magnitude of an observed effect. Reviewing the most commonly used indices will provide a firm basis for understanding the simulation study reported later in this paper, and will also support a critical reflection on each of the indices.

The Bayes factor (BF)

The oldest and still widely used index is the Bayes factor (BF). Bayesian hypothesis testing is often associated with the Bayes factor BF01, the predictive updating factor which measures the change in relative beliefs about both hypotheses H0 and H1 given the data x: $$\begin{array}{*{20}l} \underbrace{\frac{\mathbb{P}(H_{0}|x)}{\mathbb{P}(H_{1}|x)}}_{\text{Posterior odds}} =\underbrace{\frac{p(x|H_{0})}{p(x|H_{1})}}_{BF_{01}(x)}\cdot \underbrace{\frac{\mathbb{P}(H_{0})}{\mathbb{P}(H_{1})}}_{\text{Prior odds}} \end{array} $$ The Bayes factor BF01 can be rewritten as the ratio of the two marginal likelihoods of both models, which are calculated by integrating out the respective model parameters according to the prior distribution of the parameters. Generally, the calculation of these marginals can be complex for non-trivial models. In the setting of the two-sample Student's t-test, the Bayes factor is used for testing a null hypothesis H0:δ=0 of no effect against a one- or two-sided alternative H1:δ>0, H1:δ<0 or H1:δ≠0, where δ=(μ1−μ2)/σ is the effect size according to Cohen [20, p. 20], under the assumption of two independent samples and identical standard deviation σ in each group. An often-lamented problem with Bayes factors, as detailed in Kamary et al. [21] and Robert [17], is the dependence on the prior distributions assigned to the model parameters. Nevertheless, the Bayes factor has deep roots in Bayesian thinking and is one of the most widely used measures for hypothesis testing.
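For the two-sample t-test, the Bayes factor used later in this paper (the JZS Bayes factor of Rouder et al. [26]) admits a one-dimensional integral representation, which makes the dependence on the Cauchy prior scale explicit. The following Python sketch is illustrative only — the paper itself used the R package BayesFactor, and the function name and interface below are ours. It assumes the standard two-sample form with effective sample size N = n1·n2/(n1+n2) and ν = n1+n2−2 degrees of freedom:

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n1, n2, r=np.sqrt(2) / 2):
    """Sketch of the two-sample JZS Bayes factor BF10 (after Rouder et al.).

    t is the observed two-sample t statistic and r the scale of the
    Cauchy prior on the effect size delta."""
    N = n1 * n2 / (n1 + n2)  # effective sample size
    nu = n1 + n2 - 2         # degrees of freedom
    def prior_g(g):
        # Density on g induced by delta ~ Cauchy(0, r): g ~ InvGamma(1/2, r^2/2)
        return r / np.sqrt(2 * np.pi) * g ** (-1.5) * np.exp(-r ** 2 / (2 * g))
    def integrand(g):
        return ((1 + N * g) ** (-0.5)
                * (1 + t ** 2 / ((1 + N * g) * nu)) ** (-(nu + 1) / 2)
                * prior_g(g))
    numerator, _ = integrate.quad(integrand, 0, np.inf)
    denominator = (1 + t ** 2 / nu) ** (-(nu + 1) / 2)
    return numerator / denominator
```

As a sanity check, t = 0 always yields BF10 < 1 (the data favour H0), while a large observed t yields strong evidence for H1.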
Over the years, several authors, including Jeffreys [11], Kass and Raftery [22] and Van Doorn et al. [23], have offered thresholds for interpreting different values of it. For example, according to Van Doorn et al. [23], a Bayes factor BF10>3 can be interpreted as moderate evidence for the alternative H1 relative to the null hypothesis H0, and a Bayes factor BF10>10 can be interpreted as strong evidence in the same way. Note that the Bayes factor BF10 can be obtained by inverting BF01 in equation (1), that is: BF10=p(x|H1)/p(x|H0)=1/BF01. So, if for example BF01=4 states moderate evidence for the null hypothesis H0:δ=0, then BF10=1/BF01 is obtained as 1/4 for the alternative hypothesis H1:δ≠0.

The region of practical equivalence (ROPE)

The region of practical equivalence was championed by Kruschke [24], who stresses that such a region is often observed in different scientific domains under different names "such as indifference zone, range of equivalence, equivalence margin, margin of noninferiority, smallest effect size of interest, and good-enough belt" Kruschke [19, p. 272]. The essential idea is that in applied research, parameter values can often be termed practically equivalent if they lie in a given range. Starting from the posterior distribution of the parameter of interest, researchers should interpret values inside the region of practical equivalence (ROPE) as equivalent. For example, when conducting a clinical trial which compares the weight in kilograms of patients in two groups, one could define that the difference of means μ2−μ1 is practically equivalent to zero if it lies inside the ROPE [−1,1]. That means a difference of only one kilogram is interpreted as practically equivalent to zero. If the posterior distribution of μ2−μ1 now is entirely located inside the ROPE, the difference μ2−μ1 is interpreted as practically equivalent to zero a posteriori.
On the other hand, if the total probability mass of the posterior distribution of μ2−μ1 is located outside the ROPE, the null hypothesis μ2=μ1 of no difference can be rejected. The same procedure can be applied to any parameter θ of interest. If the probability mass of the posterior lies partially inside and partially outside the ROPE, the situation is inconclusive. There are two versions of the ROPE, one in which the 95% Highest-Posterior-Density-Interval (HPD) is used for the analysis (95% ROPE), and one in which the full posterior distribution is used (full ROPE). For the effect size δ, Kruschke [24] proposed to use [−0.1,0.1] as the ROPE for the null hypothesis H0:δ=0 of no effect, which is half of the effect size necessary for at least a small effect according to Cohen [20] (a small effect is defined as 0.2≤δ<0.5 or −0.5<δ≤−0.2 according to Cohen [20]).

The probability of direction (PD)

The probability of direction is detailed in Makowski et al. [13] and varies between 50% and 100%. It is defined as the proportion of the posterior distribution of the parameter that is of the median's sign. Therefore, if the posterior distribution assigns probability mass to both positive and negative parameter values, and the median is positive, it is the percentage of the posterior distribution's probability mass located on the positive real numbers (0,∞).

The MAP-based p-value

The MAP-based p-value was proposed by Mills [14] (see also Makowski et al. [13]), and can be related to the odds that a parameter has against the null hypothesis: It is defined as the ratio of the posterior density at the null value and the value of the posterior density at the maximum a posteriori (MAP) value, which is the equivalent of the mode for continuous probability distributions.
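Given posterior draws of the effect size, the full ROPE proportion, the PD and the MAP-based p-value can all be computed in a few lines. A hedged Python sketch (the paper used the R package bayestestR; the function name and the kernel density estimator here are our illustrative choices):

```python
import numpy as np
from scipy.stats import gaussian_kde

def posterior_indices(draws, rope=(-0.1, 0.1), null_value=0.0):
    """Full ROPE proportion, probability of direction (PD) and
    MAP-based p-value from posterior draws of an effect size."""
    draws = np.asarray(draws, float)
    # Full ROPE: proportion of the whole posterior inside the ROPE
    in_rope = np.mean((draws >= rope[0]) & (draws <= rope[1]))
    # PD: share of the posterior carrying the sign of the median
    med = np.median(draws)
    pd = np.mean(draws > 0) if med > 0 else np.mean(draws < 0)
    # MAP-based p-value: density at the null over density at the mode,
    # with the mode approximated on a grid of a kernel density estimate
    kde = gaussian_kde(draws)
    grid = np.linspace(draws.min(), draws.max(), 1000)
    p_map = float(kde(null_value)[0] / kde(grid).max())
    return {"rope": in_rope, "pd": pd, "p_map": p_map}

# Illustrative posterior far from the null: draws from N(-1, 0.3^2)
rng = np.random.default_rng(0)
idx = posterior_indices(rng.normal(-1.0, 0.3, 5000))
```

For this posterior, essentially no mass lies in the ROPE, the PD is close to 100%, and the MAP-based p-value is small.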
The e-value and the full Bayesian significance test (FBST)

The Full Bayesian Significance Test (FBST) was originally developed by Pereira and Stern [15] and created under the assumption that a significance test of a sharp hypothesis had to be conducted. A sharp hypothesis refers to any submanifold of the parameter space of interest, see [16], which includes for example point hypotheses like H0:δ=0. Considering a standard parametric statistical model, where \(\theta \in \Theta \subseteq \mathbb {R}^{p}\) is a (vector) parameter of interest, p(x|θ) is the likelihood function associated with the observed data x, and p(θ) is the prior distribution of θ, the posterior distribution p(θ|x) is proportional to the product of the likelihood and prior density: $$\begin{array}{*{20}l} p(\theta |x) \propto p(x|\theta)p(\theta) \end{array} $$ A hypothesis H then makes the statement that the parameter θ lies in the corresponding null set ΘH. Following [25] in notation, the Full Bayesian Significance Test (FBST) then defines two quantities: ev (H), which is the e-value supporting (or in favour of) the hypothesis H, and \(\overline {\text {ev}}(H)\), the e-value against H, also called the Bayesian evidence value against H, see Pereira and Stern [15]. First, the posterior surprise function s(θ) and its maximum s∗ restricted to the null set ΘH are denoted as $$\begin{array}{*{20}l} s(\theta):=\frac{p(\theta|x)}{r(\theta)}, \hspace{1cm} s^{*}:=s(\theta^{*})=\sup\limits_{\theta \in \Theta_{H}}s(\theta) \end{array} $$ In the definition of the posterior surprise function s(θ), the denominator r(θ) is a reference density. If the improper flat prior r(θ)∝1 is used, the surprise function becomes the posterior distribution p(θ|x). Otherwise, a noninformative prior distribution can be used as a reference density, see Stern [25].
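With a flat reference density, the surprise function is just the posterior, and the e-value against a point null (defined formally below) reduces to the posterior mass of the region where the density exceeds its value at the null. A minimal numerical sketch, in illustrative Python rather than the paper's R: the Gaussian posterior is purely an example, chosen because the e-value against H then has the closed form 2Φ(|m|/s)−1 to check against.

```python
import numpy as np
from scipy import integrate, stats

def ebar_point_null(posterior_pdf, null_value, lo, hi, points=None):
    """e-value against H: theta = null_value, using a flat reference density.

    Integrates the posterior over the tangential set
    {theta : p(theta|x) > p(null_value|x)}."""
    s_star = posterior_pdf(null_value)
    def integrand(theta):
        p = posterior_pdf(theta)
        return p if p > s_star else 0.0
    value, _ = integrate.quad(integrand, lo, hi, points=points, limit=200)
    return value

# Illustrative posterior: delta | x ~ N(-1.5, 0.5^2).  The tangential set is
# then the interval (-3, 0), so we hand quad its two break points.
post = stats.norm(-1.5, 0.5).pdf
ebar = ebar_point_null(post, 0.0, -10.0, 10.0, points=[-3.0, 0.0])
closed_form = 2 * stats.norm.cdf(1.5 / 0.5) - 1  # 2*Phi(|m|/s) - 1
```

The numerical integral matches the closed form, and the large e-value against H0 reflects a posterior concentrated far from the null.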
The next step towards the e-value is to define $$\begin{array}{*{20}l} T(\nu):=\{\theta \in \Theta|s(\theta)\leq \nu \}, \hspace{0.5cm} \bar{T}(\nu):=\Theta \setminus T(\nu) \end{array} $$ and \(\overline {T}(s^{*})\) is then called the tangential set to the hypothesis H, which contains the points of the parameter space with higher surprise (relative to the reference density r(θ)) than any point in the null set ΘH. Integrating the posterior p(θ|x) over this set can be interpreted as the Bayesian evidence against H, the e-value \(\overline {\text {ev}}(H)\): $$\begin{array}{*{20}l} \overline{\text{ev}}(H):=\overline{W}(s^{*}), \hspace{0.5cm} W(\nu):=\int_{T(\nu)}p(\theta|x)d\theta \end{array} $$ Of course the e-value ev (H) supporting H is obtained as ev\((H):=1-\overline {\text {ev}}(H)\). In the above, W(ν) is called the cumulative surprise function, and \(\overline {W}(\nu):=1-W(\nu)\). Therefore, large values of \(\overline {\text {ev}}(H)\) indicate that the hypothesis H traverses low-density regions (or equivalently, that the alternative hypothesis traverses high-density regions) so that the evidence against H is large. The theoretical properties of the FBST and the e-value(s) have been detailed in Pereira and Stern [16] and Stern [25]. Here, we focus on the behaviour of the e-value \(\overline {\text {ev}}(H)\) against H:δ=0 in the context of the Bayesian two-sample t-test. Note that one can use ev (H) to reject H if ev (H) is sufficiently small (or when \(\overline {\text {ev}}(H)\) is large), but not to confirm H, which may be seen as a drawback of the FBST. Note also that there exist asymptotic arguments using the distribution of ev (H), which make it possible to obtain critical values based on this distribution to reject a hypothesis H, similar to p-values in NHST. In the simulation study reported later, we do not make use of any asymptotic argument and solely report the e-value \(\overline {\text {ev}}(H)\) against H. Makowski et al. 
[13] also proposed the Bayes factor versus ROPE index, which does not compare the point null hypothesis H0:δ=0 against an alternative H1:δ≠0 as the normal BF, but used a null H0:δ∈[−0.1,0.1] which is given by the ROPE and then tests against the alternative H1:δ∉[−0.1,0.1] which is the complement to the ROPE. While this approach is highly similar to the traditional ROPE and shows similar behaviour indeed [13], it will not be used here. Also, the frequentist p-value is used as a reference index, which is the probability under the null hypothesis, to obtain a result equal to or more extreme than the one observed for the statistical model used, see Wasserstein & Lazar [3]. Figures 1 and 2 show the different posterior Bayesian indices for significance and size of an effect for a Bayesian two-sample t-test. Group one was simulated as \(\mathcal {N}(0.5,1)\) and group two as \(\mathcal {N}(2,1)\) each with n=10 samples and the true effect size is δ=−1.5. The FBST is visualized in Fig. 1, where the left plot shows a Cauchy prior C(0,1) (dashed line) and the resulting posterior p(δ|x) (solid black line), which is obtained by the Bayesian two-sample t-test of Rouder et al. [26]. s∗ is computed as s(0)=0.1103 (indicated by the blue point) and the integral W(0) over the set T(0) is shown as the red area under the posterior. This area is ev (H), which is 0.0418 in this case. The blue area corresponds to the integral \(\overline {W}(0)\) over the set \(\overline {T}(0)\), which consists of all parameter values δ attaining a posterior density p(δ|x) larger than p(0)=0.1103, indicated by the horizontal dashed blue line. The value of this integral is the evidence against \(H_{0}:\delta =0, \overline {\text {ev}}(H)=0.9582\), which advises the researcher to reject H0:δ=0 if a threshold of \(\overline {\text {ev}}(H)>0.95\) is used for making a decision in light of the obtained evidence. The right plot in Fig. 
1 shows the same situation, but now the reference prior r(δ) used in the surprise function has been changed from the improper flat prior r(δ)∝1 to the wide Cauchy prior C(0,1) actually used when conducting the Bayesian two-sample t-test of Rouder et al. [26]. Therefore, the surprise function values differ (see the scaling of the y-axis) and values of p(δ|x)/p(δ)>1 indicate that the posterior p(δ|x) assigns a larger probability to a given parameter value than the prior p(δ). This can be interpreted as the data having increased this parameter's probability.

Fig. 1: Visualization of the Full Bayesian Significance Test. The e-value and FBST using a flat reference prior r(δ)∝1 (left) and wide Cauchy reference prior C(0,1) (right) against H0 for the Bayesian two-sample t-test; the blue area indicates the integral over the tangential set \(\overline {T}(0)\) against H0:δ=0, which is the e-value \(\overline {\text {ev}}\) against H0; the red area is the integral over T(0), which is the e-value ev (H) in favour of H0:δ=0.

Fig. 2: Visualization of Bayesian posterior indices. Different Bayesian posterior indices for significance and size of an effect for a Bayesian two-sample t-test.

The Bayes factor BF10 of H0:δ=0 against H1:δ≠0 is shown in the upper left plot of Fig. 2 and can be interpreted as the ratio of the prior density at the point-null value δ0=0 visualised as the grey lollipop and the posterior density at the point-null value δ0=0 visualised as the red lollipop. After observing the data, H0 becomes less probable, which is reflected in the Bayes factor of BF10=3.38. This magnitude indicates only moderate evidence for H1, which is due to the small sample size of n=10. Note that the Bayes factor BF01 can be obtained by inverting the ratio. The MAP-based p-value is shown in the upper right plot and is defined as the ratio of the height of the posterior density at the null value δ0=0 and the MAP-value δMAP, the maximum a posteriori parameter.
As can be seen, the MAP estimate is near δ=−1, indicating a clear shift away from the null hypothesis. Still, the MAP-based p-value is given as pMAP=0.203, which is not significant. The lower left plot visualises the 95% and full ROPE, where the ROPE is defined as [−0.1,0.1], following the recommendations of Kruschke [27]. 2.38% probability mass of the posterior distribution is located inside the ROPE when using the 95% ROPE and 3.00% is located inside the ROPE when using the full ROPE. In a test of practical equivalence, where the null is only rejected if the posterior is located entirely outside the ROPE, the null hypothesis H0 cannot be rejected based on the ROPE. Still, if an estimation-oriented perspective is used, avoiding the classical testing stance, the ROPE-analysis shows evidence for the alternative H1 for both the 95% and full ROPE. The lower right plot in Fig. 2 shows the probability of direction (PD). It enjoys some desirable properties: First, it clearly shows that the effect is more likely to be of negative than positive sign, as 97.70% of the posterior is located on the negative real numbers. Also, the PD embraces estimation under uncertainty instead of hypothesis testing, in the same way as the ROPE does when avoiding an explicit testing stance. The posterior distribution can then be used in a second step to obtain, for example, the mean and standard deviation as estimates for the parameter. Still, hypothesis testing is also possible via rejecting the null H0:δ≥0 if at least 95% of the posterior of δ is located on the negative real axis.

A simulation study was performed to analyse the behaviour of the different measures in the setting of Welch's two-sample t-test. Pairs of data were simulated, consisting of two samples, one for each group, each normally distributed. Four settings were selected: In the first, no effect was present, and both groups were identically distributed as standard normal \(\mathcal {N}(0,1)\).
In the second, a small effect was present, and the first group was simulated as \(\mathcal {N}(2.89,1.84)\) and the second as \(\mathcal {N}(3.5,1.56)\), resulting in a true effect size of $$\begin{array}{*{20}l} \delta=\frac{(2.89-3.5)}{\sqrt{((1.84^{2}+1.56^{2})/2)}}\approx -0.357 \end{array} $$ In the third simulation setting, a medium effect was present. The first group was simulated as \(\mathcal {N}(254.08,2.36)\) and the second as \(\mathcal {N}(255.84,3.04)\), resulting in a true effect size of $$\begin{array}{*{20}l} \delta=\frac{(254.08-255.84)}{\sqrt{((2.36^{2}+3.04^{2})/2)}}\approx -0.646 \end{array} $$ The last setting used \(\mathcal {N}(15.01,3.4)\) and \(\mathcal {N}(19.91,5.8)\) distributions for the first and second group, yielding a true effect size of $$\begin{array}{*{20}l} \delta=\frac{(15.01-19.91)}{\sqrt{((3.4^{2}+5.8^{2})/2)}}\approx -1.03 \end{array} $$ For each of the four effect size settings, 10,000 datasets following the corresponding group distributions as detailed above were simulated. This procedure was repeated for different samples sizes n, ranging from n=10 to n=100 in steps of size 10 to investigate the influence of sample size on the indices. In each case, the traditional p-value, the Bayes factor BF10, the ROPE 95%, the full ROPE, the probability of direction, the MAP-based p-value and the e-value \(\overline {\text {ev}}(H_{0})\), that is the evidence against H0:δ=0 were computed. The Bayes factor was calculated as the Jeffreys-Zellner-Siow Bayes factor for the null hypothesis H0:δ=0 of no effect against the alternative H1:δ≠0, see Rouder et al. [26] and Gronau et al. [28]. More precisely, the calculated quantities are (1) the Bayes factor, a single number that quantifies the evidence for the presence or absence of an effect and (2) the posterior distribution, which quantifies the uncertainty about the size of the effect under the assumption H1:δ≠0 that it exists. 
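The three true effect sizes above all follow the same formula, δ = (μ1−μ2)/√((σ1²+σ2²)/2), and a two-line check reproduces them:

```python
import math

def cohens_delta(mu1, sigma1, mu2, sigma2):
    """Effect size delta = (mu1 - mu2) / sqrt((sigma1^2 + sigma2^2) / 2)."""
    return (mu1 - mu2) / math.sqrt((sigma1 ** 2 + sigma2 ** 2) / 2)

small = cohens_delta(2.89, 1.84, 3.5, 1.56)        # approx -0.357
medium = cohens_delta(254.08, 2.36, 255.84, 3.04)  # approx -0.646
large = cohens_delta(15.01, 3.4, 19.91, 5.8)       # approx -1.03
```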
This posterior distribution (2) of the effect size δ was then used to compute the 95% ROPE, the full ROPE, the PD and the MAP-based p-value as well as the e-value \(\overline {\text {ev}}(H_{0})\). The traditional p-value was obtained via a two-sample Welch's t-test.

The above procedure was conducted three times with the prior on the effect size δ set to three different hyperparameters to investigate the influence of the prior modelling: A noninformative Jeffreys prior was always put on the standard deviation of the normal population, while a Cauchy prior was placed on the standardised effect size. The Cauchy prior \(C(0,\sqrt {2}/2)\) was used in the first setting, C(0,1) in the second and \(C(0,\sqrt {2})\) in the third, corresponding to a medium, wide and ultrawide prior on the effect size δ.

To get more insight into the e-value \(\overline {\text {ev}}(H_{0})\), for each prior setting \(\overline {\text {ev}}(H_{0})\) was computed once using a flat improper reference density r(δ)∝1 (so that the surprise function equals the posterior distribution), and once using the Cauchy prior assigned to δ as the reference density in the surprise function s(δ).

Finally, the above procedure was repeated for a fixed sample size of n=30 to investigate the influence of noise: n=30 samples were simulated in each group to control for the influence of sample size, and Gaussian noise \(\mathcal {N}(0,\varepsilon)\) was added to the group data x and y, with ε ranging from 0.5 to 5 in steps of 0.5.

The percentage of significant results was computed for samples of increasing size n as the number of significant results divided by 10,000. This number is an estimate for the type I error probabilities of the indices, a quantity crucial for reproducible research [29]. Significant is defined here as follows: The Bayes factor is significant when BF10≥3.
A posterior distribution using the 95% ROPE or full ROPE is significant when it is located completely outside the corresponding ROPE [−0.1,0.1] around δ=0. The MAP-based p-value is significant when pMAP<0.05. The p-value is significant when p<0.05. The PD is significant when PD=1 or PD=0, and the e-value is significant when \(\overline {\text {ev}}(H_{0})>0.95\) (no matter whether the flat reference density or the Cauchy reference density was used).

The statistical programming language R was used [30] for the simulations. The Bayes factor was computed via Gaussian quadrature in the R package BayesFactor [31], which was also used to obtain the posterior distribution of δ under the alternative H1 of an existing effect. The package bayestestR [32] was used to compute the 95% ROPE, full ROPE, PD and MAP-based p-value. The evidence \(\overline {\text {ev}}\) against H0:δ=0 in the FBST was computed with the Markov-Chain-Monte-Carlo draws from the posterior distribution of δ provided by the BayesFactor package [31]. These posterior draws were interpolated to construct a posterior density of δ, which was then integrated numerically over the tangential set of H0 as required for \(\overline {\text {ev}}(H_{0})\). For more details, including the random number generator seed, a commented replication script, which reproduces all results and figures, is provided at the Open Science Framework under https://osf.io/fbz4s/.

Influence of sample size and prior modelling

Figure 3 shows the dependence of the Bayesian indices on sample size for four different effect sizes using the ultrawide prior \(C(0,\sqrt {2})\). The four plots in each row show the succession of the results for no effect, a small effect, a medium effect and finally a large effect, while the x-axis shows increasing sample size from n=10 to n=100 in each group in steps of 10.
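The FBST computation described above, interpolating a density from the posterior MCMC draws and integrating it over the tangential set {δ : p(δ|x) > p(0|x)}, can be sketched in a few lines. This is a hedged Python stand-in for the R pipeline used in the paper, shown for the flat reference density r(δ)∝1, where the surprise function equals the posterior density; the kernel density estimator and the synthetic draws are illustrative:

```python
import math
import random

def gaussian_kde(draws):
    """Density estimate built from posterior draws
    (Gaussian kernels, rule-of-thumb bandwidth)."""
    n = len(draws)
    mean = sum(draws) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in draws) / (n - 1))
    h = 1.06 * sd * n ** (-1 / 5)
    norm = n * h * math.sqrt(2 * math.pi)
    def density(x):
        return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in draws) / norm
    return density

def e_value(draws, delta0=0.0, grid_size=1001):
    """Evidence against H0: delta = delta0 under a flat reference density:
    posterior mass of the tangential set {delta : p(delta|x) > p(delta0|x)}."""
    p = gaussian_kde(draws)
    lo, hi = min(draws) - 1.0, max(draws) + 1.0
    step = (hi - lo) / (grid_size - 1)
    s0 = p(delta0)
    dens = [p(lo + i * step) for i in range(grid_size)]
    # numerical integration of the posterior density over the tangential set
    return sum(d * step for d in dens if d > s0)

random.seed(1)
shifted = [random.gauss(-1.0, 0.3) for _ in range(400)]  # posterior far from 0
centred = [random.gauss(0.0, 1.0) for _ in range(400)]   # posterior centred at 0
print(e_value(shifted))  # close to 1: strong evidence against H0
print(e_value(centred))  # small: no evidence against H0
```

With a proper Cauchy reference density, the surprise function s(δ) = p(δ|x)/r(δ) would replace the bare posterior density in the comparison and the tangential set, as in the second e-value variant reported below.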
Fig. 3 Influence of the sample size n on Bayesian significance and effect size indices for small, medium, large and no existing effects using an ultrawide prior \(C(0,\sqrt {2})\) on the effect size δ

The left plot of the first row shows that the p-value is distributed uniformly under the null hypothesis H0:δ=0. If the alternative H1:δ≠0 is true, the three plots to its right show that for increasing sample size n, the p-value becomes significant, where the necessary sample size for stating significance decreases with increasing actual effect size δ.

The second row shows the succession for the Bayes factor BF10. The left plot shows that under the null hypothesis H0:δ=0 the Bayes factor correctly converges to zero (in contrast to the p-value). This property opens the possibility of confirming the null hypothesis, which is not possible via an ordinary p-value. The three plots to the right show the progression of the Bayes factor BF10 for increasing effect size. Here, the Bayes factor accumulates more and more evidence for the alternative H1:δ≠0 for small, medium and large effect sizes. For larger effect sizes, the Bayes factor requires a much smaller sample size to state evidence for the alternative. The plots are limited to a y-range of [0,100] (except for the first plot) for better visibility, as BF10 becomes very large quickly.

The third and fourth rows show the results for the 95% and full ROPE [−0.1,0.1] around the effect size δ=0. Under the null, in both cases the percentage of the posterior's probability mass inside the ROPE increases. As δ=0 under the null, for n→∞ the posterior will eventually concentrate completely inside the ROPE, but the necessary sample size can be considerable. From the figure it becomes clear that for n=100, about 50% of the probability mass of the posterior is located inside the ROPE [−0.1,0.1] around δ=0. For increasing sample size n, this percentage will finally reach 100%.
Considering the 95% and full ROPE, even for small sample sizes like n=10 the majority of simulations show at least 10% of the posterior located inside the ROPE, so that hardly any false-positive statements are produced. Under the alternative H1:δ≠0, both the 95% and full ROPE show that the percentage of the posterior located inside the ROPE [−0.1,0.1] of no effect converges to zero for increasing sample size n. For increasing effect size δ, the necessary sample size n needed to reject the null hypothesis H0 (based on an equivalence test or an estimation under uncertainty perspective as detailed by Kruschke [19]) becomes smaller.

The fifth row shows the results for the probability of direction (PD). Under the null hypothesis H0:δ=0, the PD is not uniformly distributed, as was the case for p-values. The PD concentrates at about 70% here (see the scaling of the y-axis), which does not reflect the true effect size of δ=0, which should yield a PD near 50%. Still, under the alternative H1:δ≠0, the PD converges to 100% as sample sizes grow. The speed of convergence is faster for larger effect sizes δ≠0.

The MAP-based p-value shown in the sixth row behaves similarly to the classic p-value. One difference is that under the null hypothesis H0 it is much larger on average than the traditional p-value. Still, this behaviour is robust to increasing sample size n, and a correct interpretation of the MAP-based p-value only allows stating significance when pMAP is smaller than a significance threshold; interpreting a large pMAP as evidence for H0 is not permissible. Under the alternative H1, the behaviour is quite similar to the classic p-value: For increasing sample size n, the MAP-based p-value becomes significant, where the necessary sample size n for stating significance decreases with increasing effect size δ.
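Both the ROPE percentage and the PD are simple functionals of the posterior draws. A minimal sketch of how they could be computed, following the definitions used above (in the paper these quantities come from the bayestestR package; the synthetic posterior draws here are illustrative):

```python
import random

def rope_percentage(draws, low=-0.1, high=0.1):
    """Share of posterior draws inside the ROPE [low, high]."""
    return sum(low <= d <= high for d in draws) / len(draws)

def probability_of_direction(draws):
    """PD: share of the posterior on its dominant side of zero
    (ranges from 0.5 to 1)."""
    pos = sum(d > 0 for d in draws) / len(draws)
    return max(pos, 1 - pos)

random.seed(2)
# a narrow posterior around 0, as under H0 with large n
null_draws = [random.gauss(0.0, 0.05) for _ in range(10_000)]
# a posterior around a medium-sized negative effect
alt_draws = [random.gauss(-0.65, 0.18) for _ in range(10_000)]
print(rope_percentage(null_draws))          # most mass inside [-0.1, 0.1]
print(rope_percentage(alt_draws))           # almost no mass inside the ROPE
print(probability_of_direction(alt_draws))  # close to 1
```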
The evidence \(\overline {\text {ev}}(H_{0})\) (in the following denoted as \(\overline {\text {ev}}\)) under the flat improper reference density r(δ)∝1 is shown in the seventh row and concentrates around 0.5 under the null hypothesis H0:δ=0. The reason is that if H0:δ=0 is true, the posterior density p(δ|x) concentrates around δ=0 for n→∞, with slight fluctuations due to the randomness in the simulation. The only thing that changes with increasing sample size n is thus the scaling of the x-axis of the posterior p(δ|x), so that \(\overline {\text {ev}}\) is barely influenced by increasing sample size. The support for H0 can easily be obtained by calculating ev\((H_{0})=1-\overline {\text {ev}}(H_{0})\), which in this case also concentrates around 0.5 instead of concentrating around 1. If, on the other hand, H1:δ≠0 is true, \(\overline {\text {ev}}\) quickly signals evidence against H0 for increasing sample size n and increasing effect size δ, as shown by the three right-hand plots in the seventh row. When using the medium Cauchy prior \(C(0,\sqrt {2}/2)\) as reference density instead of the improper density r(δ)∝1, the situation is similar, but the plots in the last row in Fig. 5 show that the evidence \(\overline {\text {ev}}\) against H0 accumulates faster when H1 is true.

Figure 4 shows the results of the simulation when using a wide prior C(0,1) instead of the ultrawide prior \(C(0,\sqrt {2})\). The classic p-value is of course not affected at all by this prior change. The BF10 shown in the second row is slightly larger under the alternative H1:δ≠0, as the wide prior C(0,1) is more informative than the ultrawide prior \(C(0,\sqrt {2})\). The prior probability mass around δ=0 is more concentrated when using the wide C(0,1) prior instead of the ultrawide \(C(0,\sqrt {2})\) prior, and therefore BF10 is increased (compare the boxplots in Figs. 3 and 4).
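The informativeness ordering invoked here can be made concrete: a Cauchy prior C(0, r) has density 1/(πr(1 + (δ/r)²)), so its height at δ=0 is 1/(πr), and the smaller the scale r, the more prior mass sits near zero. A small illustrative sketch:

```python
import math

def cauchy_pdf(x, scale):
    """Density of a Cauchy(0, scale) distribution."""
    return 1.0 / (math.pi * scale * (1.0 + (x / scale) ** 2))

medium, wide, ultrawide = math.sqrt(2) / 2, 1.0, math.sqrt(2)
for name, r in [("medium", medium), ("wide", wide), ("ultrawide", ultrawide)]:
    print(f"{name:9s} C(0,{r:.3f}): density at 0 = {cauchy_pdf(0.0, r):.3f}")
# the medium prior is the most peaked at zero, the ultrawide the flattest
```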
Fig. 4 Influence of the sample size n on Bayesian significance and effect size indices for small, medium, large and no existing effects using a wide prior C(0,1) on the effect size δ

Fig. 5 Influence of the sample size n on Bayesian significance and effect size indices for small, medium, large and no existing effects using a medium prior \(C(0,\sqrt {2}/2)\) on the effect size δ

For the same reasons, the percentage of probability mass inside the 95% and full ROPE increases under the null H0:δ=0, as shown by the third and fourth rows in Fig. 4. More prior mass around δ=0 due to the narrower C(0,1) prior on δ leads to more posterior mass inside the ROPE [−0.1,0.1] around δ=0, so the boxplots of the ROPEs are shifted slightly up under the null hypothesis of no effect. Under the alternative H1, the 95% and full ROPE suffer from this change, as shown by the boxplots for small, medium and large effects in rows three and four, which are shifted up slightly. The increase of probability mass near δ=0 draws the posterior towards δ=0, and it becomes harder for the posterior to concentrate outside of the ROPE. Nevertheless, for increasing sample size, the ROPEs finally reveal evidence for the alternative H1.

The same holds for the PD, which now also needs a larger sample size to achieve the same evidence for the alternative when an effect is present. No matter whether a small, medium or large effect size is present, all boxplots shift down slightly, indicating that less probability mass is strictly positive in the posteriors produced. The narrower prior distribution seems to shrink the complete posterior distribution towards smaller values, leading in turn to a smaller PD. The MAP-based p-value is also influenced by the narrower prior: Due to the increased probability mass near δ=0, the MAP estimate of δ shrinks towards δ=0.
In combination with the larger value of the prior C(0,1) at the point-null value δ0=0 compared to that of the ultrawide prior \(C(0,\sqrt {2})\), the ratio calculated for the MAP-based p-value decreases, leading to larger MAP-based p-values and slightly upshifted boxplots under the alternative H1.

The last two rows show \(\overline {\text {ev}}\) under the improper reference density r(δ)∝1. Barely any change can be observed compared to the setting using the ultrawide prior \(C(0,\sqrt {2})\), as confirmed in the seventh row. Under the wide Cauchy prior reference density r(δ)=C(0,1), the evidence against H0:δ=0 again concentrates around \(\overline {\text {ev}}=0.5\), indicating neither strong evidence against H0 nor support for H0. Compared to the ultrawide prior used in Fig. 3, under the alternative H1:δ≠0 the evidence \(\overline {\text {ev}}\) against H0:δ=0 also barely changes. These results show that the e-value is quite robust against variations in the prior modelling.

Figure 5 shows the results when using a medium prior instead of a wide one. The classic p-value is again not affected by the prior, so the results are identical. In contrast to Figs. 3 and 4, the Bayes factor now accumulates evidence even faster, because the medium prior is even more informative than the wide and ultrawide ones. The 95% and full ROPE boxplots are therefore shifted up even higher under H0, showing that switching from the noninformative ultrawide and weakly informative wide prior to the medium prior yields larger percentages of the posterior distribution's probability mass inside the ROPE under the null hypothesis H0, as even more probability mass concentrates around δ0=0. From a Bayesian perspective, the null hypothesis is thus confirmed faster. Under the alternative H1:δ≠0, the medium prior makes it even harder for the 95% and full ROPE to reject the null hypothesis.
This is again because under the medium prior \(C(0,\sqrt {2}/2)\) more probability mass is allocated to values near δ0=0 than under the ultrawide \(C(0,\sqrt {2})\) or wide Cauchy prior C(0,1). Therefore, the posterior shifts more slowly away from the ROPE [−0.1,0.1] of no effect, and for the same sample size n the posterior mass located inside the ROPE is larger when using the medium prior on δ. Still, for increasing sample size this effect vanishes, and even under the medium prior distribution the concentration of posterior mass inside the ROPE converges to zero.

The same phenomenon holds for the PD and the MAP-based p-value. Here too, under the alternative, the narrower prior on δ around zero makes it harder for the PD and MAP-based p-value to accumulate evidence for the alternative H1. For increasing sample size n, both the PD and the MAP-based p-value still finally reject the null hypothesis. For a fixed sample size n, the same is achieved faster under the ultrawide and wide priors, which have less prior probability mass near δ0=0.

Considering \(\overline {\text {ev}}\) in the last two rows, under the improper reference density r(δ)∝1 again barely any changes can be observed compared to the settings using the ultrawide \(C(0,\sqrt {2})\) or wide C(0,1) prior, as confirmed in the seventh row of Fig. 5. Under the medium Cauchy prior reference density \(r(\delta)=C(0,\sqrt {2}/2)\), the evidence against H0:δ=0 again concentrates around \(\overline {\text {ev}}=0.5\), indicating neither strong evidence against H0 nor support for H0. Compared to the ultrawide and wide priors used in Figs. 3 and 4, under the alternative H1:δ≠0 the evidence \(\overline {\text {ev}}\) against H0:δ=0 is again barely influenced by shifting to the medium Cauchy prior, showing strong robustness of the e-value against the prior modelling.
At this point, the results show that the MAP-based p-value, the classic p-value and the e-value \(\overline {\text {ev}}\) cannot state evidence for the null hypothesis in addition to being able to state evidence for the alternative. These measures can only reject the null hypothesis H0 and offer no possibility to confirm it. For practical research, this is limiting. Also, the PD stabilises at about 75%, which lies in the middle of its possible extremes of 50% and 100%. It would be desirable for the PD to converge to 50% under the null H0:δ=0, to show that both a positive and a negative effect are equally possible. Given the behaviour of the PD under the null, it seems that the PD favours a directed alternative although the null H0:δ=0 is true. Under the alternative H1:δ≠0, the PD as well as the p-value and MAP-based p-value behave as expected.

Note that Pereira and Stern [15] created the e-value to test a sharp hypothesis H0, and rejection of H0 was the intended goal of the procedure. In contrast to the p-value and MAP-based p-value, the e-value enjoys a multitude of highly desirable properties like compliance with the likelihood principle, being a probability value derived from the posterior distribution, and possessing a version which is invariant to alternative parameterisations, see also [16]. Therefore, the e-value is preferable over the standard p-value and MAP-based p-value, also because of its robustness to the prior selection.

The Bayes factor BF10, the 95% ROPE and the full ROPE have two desirable properties: Under the null, all three measures indicate evidence for H0:δ=0, while under the alternative H1:δ≠0, they indicate evidence for H1. It is somewhat problematic, though not surprising, that these measures accumulate evidence faster under the null H0 when using a medium prior than when using a wide or ultrawide prior. Under the alternative, evidence for H1 accumulates faster when using a wide or ultrawide prior instead of a medium one.
Thus, when using a medium prior, finding evidence for H0 is easier than finding evidence for H1 with both the BF and the ROPEs. Using a wide or ultrawide prior, finding evidence for H1 is easier than finding evidence for H0. Therefore, we recommend using the wide prior C(0,1), which places itself in the middle between these two extremes. Using a medium or ultrawide prior needs further justification, because otherwise some kind of cherry-picking could happen by combining Bayes factors or ROPEs with a medium, wide or ultrawide prior depending on the goal of rejection or confirmation of the null hypothesis. Note that the e-value showed strong robustness to the prior selection. Therefore, if the rejection of a research hypothesis is the formulated goal of the scientific enterprise, the e-value based on the FBST procedure with the corresponding Cauchy prior as reference density in the surprise function may prevent such cherry-picking. The take-away message regarding the prior modelling is that the combination of prior and significance and effect size measure together can make it easier to find evidence for some hypotheses, which is problematic. Also, taking into account that the focus of research is to reveal relevant differences (clinically, in biomedical research for example), it is recommended to use at least n=100 patients in each group to ensure that also small effects can be detected reliably.

Influence of noise

Figure 6 shows the results for the influence of noise on Bayesian indices of significance and effect size. As expected and shown in the first row, the influence of noise on the classic p-value under the null H0 is negligible. Under the alternative, the p-value gets disturbed more and more with increasing noise ε. The number of significant p-values decreases with increasing noise, as shown by the boxplots, which are shifted upwards more and more as noise ε increases.
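One way to see why noise hurts under H1 but is negligible under H0: adding independent \(\mathcal {N}(0,\varepsilon)\) noise leaves the group means untouched but inflates each group's variance by ε², so a true standardised effect shrinks towards zero while a true δ=0 stays at zero. A sketch under the medium-effect parameters of the simulation (ε here denotes the noise standard deviation, matching the simulation setup; the attenuation formula is an illustrative back-of-the-envelope calculation, not output of the study):

```python
import math

def noisy_effect_size(mu1, sigma1, mu2, sigma2, eps):
    """Standardised effect size after adding independent N(0, eps) noise
    to both groups: variances inflate by eps**2, means are unchanged."""
    v1 = sigma1 ** 2 + eps ** 2
    v2 = sigma2 ** 2 + eps ** 2
    return (mu1 - mu2) / math.sqrt((v1 + v2) / 2)

# medium-effect setting from the simulation
for eps in [0.0, 0.5, 2.5, 5.0]:
    delta = noisy_effect_size(254.08, 2.36, 255.84, 3.04, eps)
    print(f"eps = {eps:.1f}: delta = {delta:.3f}")
```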
Fig. 6 Influence of noise ε on Bayesian significance and effect size indices for small, medium, large and no existing effects using an ultrawide prior \(C(0,\sqrt {2})\) on the effect size δ and sample size n=30 in each group

The BF10 has the same problems: When the null hypothesis H0:δ=0 is true, the Bayes factor is not influenced much by noise. When, on the other hand, H1:δ≠0 is true, adding noise to the observations makes it more difficult for the Bayes factor to state evidence for the alternative H1:δ≠0. This behaviour is also revealed when comparing Figs. 3 and 6: The boxplots in the fourth plot of the second row in Fig. 3 show that the Bayes factor achieves higher values than in the situation where noise is present, shown in the fourth plot of the second row in Fig. 6.

The 95% ROPE and full ROPE also suffer from increasing noise. Under the null hypothesis, the noise does not influence the percentage of posterior mass inside the ROPE, but under the alternative H1 increasing noise ε causes increasing amounts of posterior mass to be located inside the ROPE. This behaviour makes it harder for the ROPE to signal evidence for the alternative H1:δ≠0. The PD suffers from the same problem, as increasing noise causes the posterior to become more and more symmetric around δ0=0, indicated by the boxplots being successively shifted down for increasing noise under H1. The MAP-based p-value is likewise not influenced by noise under the null hypothesis H0, but the boxplots are shifted up under the alternative, indicating that increasing noise leads to larger and less often significant p-values, which makes it harder for the MAP-based p-value to reject the null hypothesis in the presence of noise. The e-value \(\overline {\text {ev}}\) is also barely influenced by noise under the null hypothesis H0, both when used in combination with the flat reference density r(δ)∝1 and with the wide Cauchy reference density r(δ)=C(0,1).
Under the alternative, increasing noise makes it harder for \(\overline {\text {ev}}\) to state evidence against H0, as shown in the last two rows of Fig. 6.

Sensitivity and type I error rates

Table 1 shows Monte Carlo estimates for the type I error rates and the percentage of significant indices based on the results of the previous simulations. For increasing sample size n, the type I error rates were estimated as the number of significant indices divided by 10,000 when no effect was present. In the cases where a small, medium or large effect was present, the percentage shows the number of significant measures divided by 10,000.

Table 1 Percentage of significant Bayesian indices of significance and effect size for varying sample sizes for small, medium, large and no existing effects using a wide C(0,1) prior on the effect size δ

Significant was defined as follows here: p<.05 for p-values; BF10≥3 for the Bayes factor, which equals moderate evidence according to Van Doorn et al. [23]; a posterior located completely outside the 95% or full ROPE; and for the PD, 100% of the posterior's mass strictly positive or negative. The e-value \(\overline {\text {ev}}\) against H0:δ=0 was required to be larger than 0.95, both when used with the improper reference density r(δ)∝1 and with the wide Cauchy prior r(δ)=C(0,1) in the surprise function.

Figure 7 visualises the results: The left plot corresponds to the table row of no effect and shows the type I error rates of the indices. As shown in the figure, the classic p-value fluctuates around its nominal significance level of α=.05, although no effect is present. In contrast, most Bayesian indices have type I error rates about half those of the classic p-value. A comparison of the Bayesian posterior indices reveals three groups: The first group consists of the Bayes factor BF10, the 95% ROPE and the MAP-based p-value.
These indices concentrate around a false-positive rate of about 1% for increasing sample size. Still, the Bayes factor and ROPE make more type I errors for small sample sizes, while the MAP-based p-value makes more for large sample sizes. The second group consists of the PD and the full ROPE, both of which make practically no type I errors independent of the sample size n. This fact can be attributed to the quite conservative behaviour of both indices compared to the indices in group one. The third group consists of the e-value with improper or wide Cauchy reference density, which achieves type I error rates slightly smaller than the traditional p-value, but larger than those of the other Bayesian indices.

Fig. 7 Sensitivity of Bayesian significance and effect size indices for small, medium, large and no existing effects using a wide prior C(0,1) on the effect size δ and varying sample size n

The second plot corresponds to the small effect part of Table 1. Now the desired behaviour is that the indices detect the existing effect at the smallest possible sample size n. The classic p-value shows the most liberal behaviour in stating that an effect is present, which reflects the often-criticised fact that p-values overstate the significance of an effect compared to other indices of effect size and significance, see Wasserstein and Lazar [3]. The Bayesian indices signal evidence for the alternative more slowly than their frequentist counterparts, and again the three groups already discovered in the first plot reveal themselves here: The BF10, the 95% ROPE and the MAP-based p-value detect the small effect more often than the indices of the second group, which again includes the full ROPE and the PD. The third group, consisting of the two versions of the e-value, shows behaviour similar to the p-value: They signal the existence of an effect more quickly than their Bayesian competitors, which comes at the cost of increased type I errors as shown in the left plot previously.
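The type I error estimates in the left plot can be reproduced in miniature: simulate many null datasets, apply Welch's t-test, and count the fraction of p < .05. A small pure-Python sketch, using a normal approximation to the t reference distribution (adequate at n = 50 per group) and fewer replications than the paper's 10,000 per cell:

```python
import math
import random

def welch_p_value(x, y):
    """Two-sided Welch t-test p-value, using a normal approximation
    to the reference distribution (adequate for moderately large n)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    t = (mx - my) / math.sqrt(vx / nx + vy / ny)
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(3)
reps, n, alpha = 2000, 50, 0.05
significant = sum(
    welch_p_value([random.gauss(0, 1) for _ in range(n)],
                  [random.gauss(0, 1) for _ in range(n)]) < alpha
    for _ in range(reps)
)
print(significant / reps)  # fluctuates around the nominal level 0.05
```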
The third and fourth plots correspond to the medium and large effect parts of Table 1 and confirm the previous analysis. The p-value and e-value(s) state significance more often than every other index, but BF10, the 95% ROPE and the MAP-based p-value now show similar behaviour for increasing effect size δ. Also, from the progression of the PD and full ROPE it becomes clear that the PD more often states the presence of an effect than the full ROPE, which is more conservative, even for increasing effect size. Still, for increasing sample size, these "slow" indices eventually state the presence of the effect, too. Interestingly, the MAP-based p-value behaves similarly to the full ROPE and PD for large effect sizes, as shown in the right plot of Fig. 7. The behaviour of the e-value again shows substantial similarity to the behaviour of the p-value under the medium and large effect settings.

This paper studied the behaviour of common Bayesian significance and effect size indices for the setting of the two-sample Welch's t-test, which is often applied in the analysis of clinical trial data. To guide researchers in choosing an appropriate index when the Bayesian counterpart to Welch's two-sample t-test as proposed by Rouder et al. [26] is used instead, an extensive simulation study analysed the influence of sample size n, the prior modelling and noise ε. Also, the type I error rates and sensitivities to detect an existing effect were studied.

The results show that Bayesian significance and effect size indices can be split into two categories: Indices which can state evidence for the null hypothesis H0:δ=0 and the alternative H1:δ≠0, and indices which can only state evidence for the alternative. The first group consists of the Bayes factor, the 95% ROPE and the full ROPE. The MAP-based p-value, the PD and the e-value belong to the second group, with the MAP-based p-value and the e-value showing behaviour similar to that of the classic p-value.
Note that formally the e-value belongs to the first group, but the simulation results showed that the e-value does not state evidence for the null hypothesis H0 even when H0 is true. On the other hand, the e-value showed the best performance of all indices when H1 was true, and based on its other properties, for a review see Pereira, Stern and Wechsler [16], it is preferable over the MAP-based p-value, the PD and the classic p-value. The PD suffers from the fact that under H0 it stabilises at about 0.7, which is unintuitive and has to be interpreted as a tendency to favour the alternative when in fact the null hypothesis H0 is true, see Figs. 3, 4 and 5. Thus, when rejection of a null hypothesis is the goal, we recommend using the FBST and reporting the e-value based on the corresponding Cauchy prior as reference density in the surprise function. Also, the e-value complies with the likelihood principle and is robust against the prior modelling, avoiding cherry-picking.

If the goal of the scientific enterprise is to confirm a research hypothesis, then based on the results the Bayes factor, the 95% ROPE or the full ROPE should be considered. All three indices show similar behaviour with regard to increasing sample size n, and state evidence for both H0 and H1 depending on the presence of an effect. The analysis of the prior modelling showed that both the ultrawide and medium priors on δ could lead to cherry-picking by combining a selected index like a ROPE or BF with the prior: For example, when choosing a medium prior with the goal of confirming H0, evidence for H0 accumulates faster than when using a wide or ultrawide prior. If the goal is to find evidence for the alternative, evidence for H1 accumulates faster when using a wide or ultrawide prior instead of a medium one.
Therefore, we recommend using the wide prior C(0,1) when the goal is to confirm a hypothesis, as this choice places itself in the middle between the two other extremes and prevents cherry-picking in the case where no prior information is available. The analysis of the influence of noise showed that all Bayesian indices suffered from increasing noise under H1, with no apparent patterns or regularities and none of the indices being clearly more robust to noise than the others.

The type I error rates and the sensitivity to detect an existing effect revealed that all Bayesian indices should be preferred to the classic p-value, although the e-value showed only slightly reduced type I error rates compared to the traditional p-value. This result is essential, as the control of type I error rates is one of the most critical aspects in clinical trials, see McElreath [29] and Ioannidis [7]. The results showed further that the full ROPE and the PD achieve the best control of type I errors. As the PD cannot transparently state evidence for the null, as shown previously, we recommend using the full ROPE to control type I errors in clinical trials. While the Bayes factor, the MAP-based p-value, the e-value and the 95% ROPE are more sensitive and detect more effects at the same sample size n, their type I error rate control is weaker.

To guide researchers in the selection of an appropriate index for clinical trials, we recommend using the full ROPE in general, for the following reasons: Like the Bayes factor and 95% ROPE, the full ROPE can state evidence for both the null and the alternative hypothesis. The influence of sample size n, noise ε and prior modelling is similar for all three indices, but the type I error rate control is better for the full ROPE. The slightly weaker sensitivity to existing effects can be overcome by simply increasing the study sample size n, as shown in Fig.
7: For sample sizes of n=100, the sensitivity is nearly equal to the sensitivity of the Bayes factor and 95% ROPE when a large effect is present. When medium or small effects are present, larger sample sizes are required, but as often multiple hundreds of patients participate in clinical trials, the benefits of type I error control overshadow the higher costs incurred by increased sample size.Footnote 1 Therefore, researchers and clinicians should benefit from using the full ROPE in the analysis of clinical trial data when conducting a two-sample Bayesian t-test through better type I error control and precise effect size estimation. The datasets generated and/or analysed during the current study as well as a full replication script to reproduce all results are available in the Open Science Framework (OSF) repository, https://osf.io/fbz4s/. In the rare situation where the type I error rate is of less importance, we recommend to use the e-value instead, as it has the best sensitivity to detect an existing effect of all indices analysed, and is an attractive Bayesian replacement of the traditional p-value. Null hypothesis significance testing ROPE: Region of practical equivalence Probability of direction MAP-based p-value: Maximum a posteriori based p-value RCT: randomized clinical trial ASA: American statistical association JASP: Jeffreys awesome statistics package (software) SPSS: Statistics package for the social sciences Nuijten MB, Hartgerink CHJ, van Assen MALM, Epskamp S, Wicherts JM. The prevalence of statistical reporting errors in psychology (1985-2013). Behav Res Methods. 2016; 48(4):1205–26. https://doi.org/10.3758/s13428-015-0664-2. PubMed Article Google Scholar Wetzels R, Matzke D, Lee MD, Rouder JN, Iverson GJ, Wagenmakers EJ. Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspect Psychol Sci. 2011; 6(3):291–8. https://doi.org/10.1177/1745691611406923. Wasserstein RL, Lazar NA. 
The ASA's Statement on p-Values: Context, Process, and Purpose. The American Statistician. 2016; 70(2):129–133. https://doi.org/10.1080/00031305.2016.1154108. Wasserstein RL, Schirm AL, Lazar NA. Moving to a World Beyond "p<0.05". Am Stat. 2019; 73(sup1):1–19. https://doi.org/10.1080/00031305.2019.1583913. Matthews R, Wasserstein R, Spiegelhalter D. The ASA's p-value statement, one year on. Significance. 2017; 14(2):38–41. https://doi.org/10.1111/j.1740-9713.2017.01021.x. Ioannidis JPA. What Have We (Not) Learnt from Millions of Scientific Papers with p-Values? Am Stat. 2019; 73:20–5. https://doi.org/10.1080/00031305.2018.1447512. Ioannidis JPA. Why Most Clinical Research Is Not Useful. PLoS Med. 2016; 13(6):1002049. https://doi.org/10.1371/journal.pmed.1002049. Benjamin DJ, Berger JO, Johannesson M, Nosek BA, Wagenmakers E-J, Berk R, Bollen KA, Brembs B, Brown L, Camerer C, Cesarini D, Chambers CD, Clyde M, Cook TD, De Boeck P, Dienes Z, Dreber A, Easwaran K, Efferson C, Fehr E, Fidler F, Field AP, Forster M, George EI, Gonzalez R, Goodman S, Green E, Green DP, Greenwald AG, Hadfield JD, Hedges LV, Held L, Hua Ho T, Hoijtink H, Hruschka DJ, Imai K, Imbens G, Ioannidis JPA, Jeon M, Jones JH, Kirchler M, Laibson D, List J, Little R, Lupia A, Machery E, Maxwell SE, McCarthy M, Moore DA, Morgan SL, Munafó M, Nakagawa S, Nyhan B, Parker TH, Pericchi L, Perugini M, Rouder J, Rousseau J, Savalei V, Schönbrodt FD, Sellke T, Sinclair B, Tingley D, Van Zandt T, Vazire S, Watts DJ, Winship C, Wolpert RL, Xie Y, Young C, Zinman J, Johnson VE. Redefine statistical significance. Nat Hum Behav. 2018; 2(1):6–10. https://doi.org/10.1038/s41562-017-0189-z. Etz A, Wagenmakers E-J. J. B. S. Haldane's Contribution to the Bayes Factor Hypothesis Test. Stat Sci. 2015; 32(2):313–29. https://doi.org/10.1214/16-STS599. http://arxiv.org/abs/1511.08180. Ly A, Verhagen J, Wagenmakers EJ.
An evaluation of alternative methods for testing hypotheses, from the perspective of Harold Jeffreys. J Math Psychol. 2016; 72:43–55. https://doi.org/10.1016/j.jmp.2016.01.003. Jeffreys H. Theory of Probability, 3rd edn. Oxford: Oxford University Press; 1961. Kruschke JK, Liddell TM. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychon Bull Rev. 2018; 25:178–206. https://doi.org/10.3758/s13423-016-1221-4. Makowski D, Ben-Shachar MS, Chen SHA, Lüdecke D. Indices of Effect Existence and Significance in the Bayesian Framework. Front Psychol. 2019; 10:2767. https://doi.org/10.3389/fpsyg.2019.02767. Mills J. Objective Bayesian Hypothesis Testing; 2017. https://economics.ku.edu/sites/economics.ku.edu/files/files/Seminar/papers1718/april20.pdf. De Bragança Pereira CA, Stern JM. Evidence and credibility: Full Bayesian significance test for precise hypotheses. Entropy. 1999; 1(4):99–110. https://doi.org/10.3390/e1040099. Pereira CADB, Stern JM, Wechsler S. Can a significance test be genuinely bayesian? Bayesian Analysis. 2008; 3(1):79–100. https://doi.org/10.1214/08-BA303. Robert CP. The expected demise of the Bayes factor. J Math Psychol. 2016; 72(2009):33–7. https://doi.org/10.1016/j.jmp.2015.08.002. http://arxiv.org/abs/1506.08292. Ly A, Verhagen J, Wagenmakers EJ. Harold Jeffreys's default Bayes factor hypothesis tests: Explanation, extension, and application in psychology. J Math Psychol. 2016; 72:19–32. https://doi.org/10.1016/j.jmp.2015.06.004. Kruschke JK. Rejecting or Accepting Parameter Values in Bayesian Estimation. Adv Methods Pract Psychol Sci. 2018; 1(2):270–80. https://doi.org/10.1177/2515245918771304. Cohen J. Statistical Power Analysis for the Behavioral Sciences, 2nd edn. Hillsdale: Routledge; 1988. Kamary K, Mengersen K, Robert CP, Rousseau J. Testing hypotheses via a mixture estimation model. arXiv preprint. 2014:1–37.
http://arxiv.org/abs/1412.2044. Kass RE, Raftery AE. Bayes factors. J Am Stat Assoc. 1995; 90(430):773–95. van Doorn J, van den Bergh D, Bohm U, Dablander F, Derks K, Draws T, Evans NJ, Gronau QF, Hinne M, Kucharský Š, Ly A, Marsman M, Matzke D, Raj A, Sarafoglou A, Stefan A, Voelkel JG, Wagenmakers E-J. The JASP Guidelines for Conducting and Reporting a Bayesian Analysis. PsyArXiv Preprint. 2019. https://doi.org/10.31234/osf.io/yqxfr. Kruschke JK. Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan, 2nd edn. Oxford: Academic Press; 2015, pp. 1–759. https://doi.org/10.1016/B978-0-12-405888-0.09999-2. Stern JM, Pereira CAdB. The e-value: A Fully Bayesian Significance Measure for Precise Statistical Hypotheses and its Research Program. arXiv preprint. 2020. http://arxiv.org/abs/2001.10577. Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychon Bull Rev. 2009; 16(2):225–37. https://doi.org/10.3758/PBR.16.2.225. Kruschke JK. Bayesian estimation supersedes the t-test. J Exp Psychol Gen. 2013; 142(2):573–603. https://doi.org/10.1037/a0029146. Gronau QF, Ly A, Wagenmakers E-J. Informed Bayesian t-Tests. Am Stat. 2019. https://doi.org/10.1080/00031305.2018.1562983. McElreath R, Smaldino PE. Replication, communication, and the population dynamics of scientific discovery. PLoS ONE. 2015; 10(8):1–16. https://doi.org/10.1371/journal.pone.0136088. R Core Team. R: A Language and Environment for Statistical Computing. 2019. https://www.r-project.org/. Morey RD, Rouder JN. BayesFactor: Computation of Bayes Factors for Common Designs. 2018. https://cran.r-project.org/package=BayesFactor. Makowski D, Ben-Shachar MS, Lüdecke D.
bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework. J Open Source Softw. 2019; 4(40). https://doi.org/10.21105/joss.01541. The quality of a first draft of the manuscript was improved by the helpful comments of Julio Michael Stern, who pointed the author towards the FBST and the e-value. Also, the author thanks Bruno Mario Cesana, M.D., whose comments clearly helped in improving the overall quality of the manuscript. The author also thanks the Center for Media and Computing Technology at University of Siegen for access to their high-performance computing cluster. Department of Mathematics, University of Siegen, Walter-Flex-Str. 3, Siegen, Germany Riko Kelter The author(s) read and approved the final manuscript. Correspondence to Riko Kelter. The author declares that he has no competing interests. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Kelter, R. 
Analysis of Bayesian posterior significance and effect size indices for the two-sample t-test to support reproducible medical research. BMC Med Res Methodol 20, 88 (2020). https://doi.org/10.1186/s12874-020-00968-2

Keywords: Bayesian significance and effect measures; Bayesian testing; Student's t-test; Bayesian biostatistics
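As a concrete illustration of the full-ROPE rule recommended in the discussion above, here is a minimal Python sketch. The ROPE bounds (±0.1 for a standardized effect size) follow Kruschke's common convention, and the 97.5 %/2.5 % decision thresholds are assumptions for illustration rather than values prescribed by the article; a real analysis would use posterior draws from a Bayesian t-test (e.g. via bayestestR) instead of the stand-in lists below.

```python
def rope_decision(posterior, rope=(-0.1, 0.1), accept=0.975, reject=0.025):
    """Full-ROPE rule: base the decision on the fraction of the WHOLE
    posterior that falls inside the region of practical equivalence.
    Thresholds are one common convention, assumed for illustration."""
    lo, hi = rope
    inside = sum(lo <= d <= hi for d in posterior) / len(posterior)
    if inside >= accept:
        return inside, "accept H0"
    if inside <= reject:
        return inside, "reject H0"
    return inside, "undecided"

# Stand-in posterior draws of a standardized effect size d
# (deterministic lists, not a real Bayesian t-test posterior):
null_like = [k / 1000 for k in range(-50, 51)]          # all inside the ROPE
effect_like = [0.5 + k / 1000 for k in range(-50, 51)]  # centred at d = 0.5
print(rope_decision(null_like)[1])    # accept H0
print(rope_decision(effect_like)[1])  # reject H0
```

Unlike the 95 % ROPE (which checks only whether the credible interval overlaps the ROPE), the full-ROPE rule uses the entire posterior mass, which is what drives its stronger type I error control in the simulations discussed above.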
Bridgeland stability condition

In mathematics, and especially algebraic geometry, a Bridgeland stability condition, defined by Tom Bridgeland, is an algebro-geometric stability condition defined on elements of a triangulated category. The case of original interest and particular importance is when this triangulated category is the derived category of coherent sheaves on a Calabi–Yau manifold, and this situation has fundamental links to string theory and the study of D-branes. Such stability conditions were introduced in a rudimentary form, called $\Pi $-stability, by Michael Douglas and used to study BPS B-branes in string theory.[1] This concept was made precise by Bridgeland, who phrased these stability conditions categorically, and initiated their study mathematically.[2]

Definition

The definitions in this section are presented as in the original paper of Bridgeland, for arbitrary triangulated categories.[2] Let ${\mathcal {D}}$ be a triangulated category.

Slicing of triangulated categories

A slicing ${\mathcal {P}}$ of ${\mathcal {D}}$ is a collection of full additive subcategories ${\mathcal {P}}(\varphi )$ for each $\varphi \in \mathbb {R} $ such that

• ${\mathcal {P}}(\varphi )[1]={\mathcal {P}}(\varphi +1)$ for all $\varphi $, where $[1]$ is the shift functor on the triangulated category,
• if $\varphi _{1}>\varphi _{2}$ and $A\in {\mathcal {P}}(\varphi _{1})$ and $B\in {\mathcal {P}}(\varphi _{2})$, then $\operatorname {Hom} (A,B)=0$, and
• for every object $E\in {\mathcal {D}}$ there exists a finite sequence of real numbers $\varphi _{1}>\varphi _{2}>\cdots >\varphi _{n}$ and a collection of triangles $E_{i-1}\to E_{i}\to A_{i}$ with $0=E_{0}$, $E_{n}=E$, and $A_{i}\in {\mathcal {P}}(\varphi _{i})$ for all $i$.

The last property should be viewed as axiomatically imposing the existence of Harder–Narasimhan filtrations on elements of the category ${\mathcal {D}}$.
Stability conditions

A Bridgeland stability condition on a triangulated category ${\mathcal {D}}$ is a pair $(Z,{\mathcal {P}})$ consisting of a slicing ${\mathcal {P}}$ and a group homomorphism $Z:K({\mathcal {D}})\to \mathbb {C} $, where $K({\mathcal {D}})$ is the Grothendieck group of ${\mathcal {D}}$, called a central charge, satisfying

• if $0\neq E\in {\mathcal {P}}(\varphi )$ then $Z(E)=m(E)\exp(i\pi \varphi )$ for some strictly positive real number $m(E)\in \mathbb {R} _{>0}$.

It is conventional to assume the category ${\mathcal {D}}$ is essentially small, so that the collection of all stability conditions on ${\mathcal {D}}$ forms a set $\operatorname {Stab} ({\mathcal {D}})$. In good circumstances, for example when ${\mathcal {D}}={\mathcal {D}}^{b}\operatorname {Coh} (X)$ is the derived category of coherent sheaves on a complex manifold $X$, this set actually has the structure of a complex manifold itself.

Technical remarks about stability condition

It is shown by Bridgeland that the data of a Bridgeland stability condition is equivalent to specifying a bounded t-structure ${\mathcal {P}}(>0)$ on the category ${\mathcal {D}}$ and a central charge $Z:K({\mathcal {A}})\to \mathbb {C} $ on the heart ${\mathcal {A}}={\mathcal {P}}((0,1])$ of this t-structure which satisfies the Harder–Narasimhan property above.[2] An element $E\in {\mathcal {A}}$ is semi-stable (resp. stable) with respect to the stability condition $(Z,{\mathcal {P}})$ if for every surjection $E\to F$ for $F\in {\mathcal {A}}$, we have $\varphi (E)\leq ({\text{resp.}}<)\,\varphi (F)$ where $Z(E)=m(E)\exp(i\pi \varphi (E))$ and similarly for $F$.

Examples

From the Harder–Narasimhan filtration

Recall the Harder–Narasimhan filtration for a smooth projective curve $X$ implies for any coherent sheaf $E$ there is a filtration $0=E_{0}\subset E_{1}\subset \cdots \subset E_{n}=E$ such that the factors $E_{j}/E_{j-1}$ have slope $\mu _{j}={\text{deg}}/{\text{rank}}$.
We can extend this filtration to a bounded complex of sheaves $E^{\bullet }$ by considering the filtration on the cohomology sheaves $E^{i}=H^{i}(E^{\bullet })[+i]$ and defining the slope of $E_{j}^{i}$ to be $\mu _{j}+i$, giving a function $\phi :K(X)\to \mathbb {R} $ for the central charge.

Elliptic curves

There is an analysis by Bridgeland for the case of elliptic curves. He finds[2][3] there is an equivalence

${\text{Stab}}(X)/{\text{Aut}}(X)\cong {\text{GL}}^{+}(2,\mathbb {R} )/{\text{SL}}(2,\mathbb {Z} )$

where ${\text{Stab}}(X)$ is the set of stability conditions and ${\text{Aut}}(X)$ is the set of autoequivalences of the derived category $D^{b}(X)$.

References

1. Douglas, M.R., Fiol, B. and Römelsberger, C., 2005. Stability and BPS branes. Journal of High Energy Physics, 2005(09), p. 006.
2. Bridgeland, Tom (2006-02-08). "Stability conditions on triangulated categories". arXiv:math/0212237.
3. Uehara, Hokuto (2015-11-18). "Autoequivalences of derived categories of elliptic surfaces with non-zero Kodaira dimension". pp. 10–12. arXiv:1501.06657 [math.AG].

Papers

• Stability conditions on $A_{n}$ singularities
• Interactions between autoequivalences, stability conditions, and moduli problems
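To make the curve case above concrete, the standard textbook example (stated here as a supplement, not taken verbatim from the article) takes the central charge on a smooth projective curve $X$ to be

```latex
Z(E) = -\deg(E) + i\,\operatorname{rank}(E), \qquad
\varphi(E) = \tfrac{1}{\pi}\arg Z(E) \in (0,1],
```

so that the classical slope satisfies $\mu(E) = \deg(E)/\operatorname{rank}(E) = -\cot\bigl(\pi\varphi(E)\bigr)$, an increasing function of the phase on $(0,1)$. Bridgeland semistability for this pair therefore coincides with classical slope semistability, and the Harder–Narasimhan filtration supplies the required slicing.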
Research article | Open | Published: 27 November 2015

Experimental evolution of recombination and crossover interference in Drosophila caused by directional selection for stress-related traits

Dau Dayal Aggarwal, Eugenia Rashkovetsky, Pawel Michalak, Irit Cohen, Yefim Ronin, Dan Zhou, Gabriel G. Haddad & Abraham B. Korol

BMC Biology volume 13, Article number: 101 (2015)

Population genetics predicts that tight linkage between new and/or pre-existing beneficial and deleterious alleles should decrease the efficiency of natural selection in finite populations. By decoupling beneficial and deleterious alleles and facilitating the combination of beneficial alleles, recombination accelerates the formation of high-fitness genotypes. This may impose indirect selection for increased recombination. Despite the progress in theoretical understanding, the interplay between recombination and selection remains a controversial issue in evolutionary biology. Even less satisfactory is the situation with crossover interference, which is a deviation of the double-crossover frequency in a pair of adjacent intervals from the product of the recombination rates in the two intervals expected under the assumption of crossover independence. Here, we report substantial changes in recombination and interference in three long-term directional selection experiments with Drosophila melanogaster: for desiccation (~50 generations), hypoxia, and hyperoxia tolerance (>200 generations each). For all three experiments, we found a high interval-specific increase of recombination frequencies in selection lines (up to 40–50 % per interval) compared to the control lines. We also discovered a profound effect of selection on interference, as expressed by an increased frequency of double crossovers in selection lines. Our results show that changes in interference are not necessarily coupled with increased recombination.
Our results support the theoretical predictions that adaptation to a new environment can promote evolution toward higher recombination. Moreover, this is the first evidence of selection for different recombination-unrelated traits potentially leading, not only to evolution toward increased crossover rates, but also to changes in crossover interference, one of the fundamental features of recombination.

Background

Unraveling the forces responsible for the nearly universal distribution of sex and recombination among eukaryotes is one of the central problems in evolutionary biology. Several classes of models based on the combinatorial consequences of recombination (initially suggested by Weismann [1]) have been developed to explain the maintenance of sex and recombination, including selection against deleterious mutations and combination of advantageous mutations [2–7], and genetic adaptation to varying environments, both biotic and abiotic [8–12]. Tight linkage between new and/or pre-existing beneficial and deleterious alleles should decrease the efficiency of natural selection, as a consequence of the Hill-Robertson effect [13], which includes various forms of interference in finite populations [14–17]. Recombination accelerates the formation of high-fitness genotypes, which in turn can indirectly select for higher recombination rates. The shared condition for such situations is negative linkage disequilibrium (LD < 0) between fitness loci, as a result of weak negative epistasis, spatially and temporally varying selection (biotic or abiotic), or genetic drift [14, 15, 18–20]. Despite the considerable progress in theoretical analyses over the last decade, the interplay between mutation, recombination and selection remains a controversial issue in evolutionary biology, partly due to a lack of robust empirical evidence.
As noted by Barton [21], "…although the basic theoretical framework is clear, we still do not know whether selection is generally strong enough, and has the right form, to give a general advantage to sex and recombination". In this respect, it is worth mentioning the important and debated assumption of insufficient recombination as a limit to selection. Numerous studies support this hypothesis [20, 22–30], while opposite conclusions have also been reached [31–35] based on the idea that a low level of recombination should be sufficient to achieve most of the benefits associated with this process [36]. The existence of significant genetic variation for recombination is a precondition for efficient indirect selection for recombination. Such variation has indeed been demonstrated in many organisms [12, 37]. Experiments showing responses to direct selection for altered recombination frequency (rf) provide further evidence for genetic polymorphism at recombination-controlling loci [38–45]. A question arises as to whether selection for fitness-related traits can utilize this variation and lead to directional changes in rf. Theoretical models indicate that directional or variable selection for multilocus traits may promote evolution towards increased recombination [18, 46]. A considerable increase in rf as a result of selection for various traits unrelated to recombination has indeed been observed in a few studies with Drosophila melanogaster [47–52] (Additional file 1: Table S1). Simulation analysis suggests that interaction between drift and selection could be the source of LD < 0 in most of the studies where increased recombination was caused by selection for unrelated quantitative traits [53].
Substantial evidence indicates that the observed frequency of double crossovers in adjacent intervals usually differs from the product of recombination rates in the two intervals, which is expected on the assumption of independence, a phenomenon termed crossover interference [54, 55]. The degree of interference is measured by the coefficient of coincidence (c), the ratio of observed to expected rates of double crossovers in target intervals: positive interference (c < 1) corresponds to situations in which the occurrence of a crossover in one segment reduces the probability of exchange in the second segment, whereas negative interference (c > 1) refers to situations in which the observed rate of double crossovers is higher than the rate expected under independence. Positive crossover interference is a common characteristic of meiotic organisms, with only a very few known exceptions (some fungi) where recombination proceeds with no interference ([56] and references therein). It is generally assumed that negative crossover interference is mainly associated with intragenic recombination (gene conversion). Nevertheless, cases are known of a higher than expected frequency of double crossovers in adjacent segments of small genetic but large physical length. In Drosophila melanogaster, a strong excess of double exchanges was reported within a 4 cM segment of chromosome 3 spanning the centromere and accounting for 25 % of its cytological length [57]. Similar results have been obtained in other Drosophila studies with autosomes [12, 58, 59], but not with the X chromosome [60]. Negative interference in Drosophila has also been shown to be associated with the interchromosomal effect of translocations on recombination and in situations with temperature-induced recombination [59]. It has been suggested that negative interference could be a characteristic of genomic regions with a low density of recombination events [57].
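The coefficient of coincidence defined above can be computed directly from three-point cross counts. A minimal sketch (the interval names and counts below are invented for illustration, not taken from the study's data):

```python
def coincidence(n_total, n_rec_ab, n_rec_bc, n_double):
    """Coefficient of coincidence c = observed / expected double crossovers.

    n_rec_ab, n_rec_bc: recombinant counts in intervals A-B and B-C
    (double crossovers count as recombinant in both intervals);
    n_double: offspring recombinant in both intervals simultaneously.
    """
    rf_ab = n_rec_ab / n_total
    rf_bc = n_rec_bc / n_total
    expected_doubles = rf_ab * rf_bc * n_total  # independence assumption
    return n_double / expected_doubles

# Hypothetical backcross of 1000 flies over two adjacent marked intervals:
c = coincidence(n_total=1000, n_rec_ab=200, n_rec_bc=250, n_double=35)
print(round(c, 2))  # 0.7 (positive interference, c < 1)
```

With the same margins, observing 50 doubles would give c = 1 (no interference) and anything above 50 would give c > 1, i.e. negative interference.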
Despite numerous physical and formal models of interference and corresponding statistical tools to analyze experimental data on interference, only one attempt has been undertaken to explain interference as an evolvable feature [61]. As emphasized by Wang et al. [62], interference remains a mystery, an evolutionary conundrum. To our knowledge, this aspect has generally been overlooked, despite the interesting models aimed at understanding the mechanism. Herein, we report new results showing a substantial increase in recombination frequency and changes in crossover interference in directional selection experiments with D. melanogaster for desiccation, hypoxia, and hyperoxia tolerance. Novel elements include the facts that (1) the effect of long-term selection (50–200 generations) for three traits unrelated to recombination was evaluated over 16 marked intervals, with independent replicates, and (2) in addition to increased recombination, relaxation of positive interference and the occurrence of significant negative interference were observed, which may be considered the first evidence of experimental evolution of crossover interference.

Results

For each of the three selection experiments, we estimated the recombination frequency and coefficient of crossover coincidence in backcrossed progeny (scheme in Additional file 2: Figure S1).

Effect of selection for desiccation tolerance on recombination

A highly significant interval-specific increase in rf was observed in each of the three large chromosomes in the selection lines compared to controls (Fig. 1, Table 1; Additional file 3). In the X chromosome, we observed a maximal relative increase in rf (δrf) in the proximal interval v-f (from 21.9 to 32.1 %, δrf = 46.7 %, P = 3.1 × 10–18) and a moderate increase in the interval cv-v (from 19.7 to 26.4 %, δrf = 34.3 %, P = 4.1 × 10–9).
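The relative increase δrf used throughout these results is simply the percentage change of the selection-line rf over the control value; for the v-f interval above:

```python
def delta_rf(rf_control, rf_selection):
    """Relative change in recombination frequency, in percent."""
    return 100.0 * (rf_selection - rf_control) / rf_control

# v-f interval of the X chromosome (rf values in %, desiccation experiment):
print(round(delta_rf(21.9, 32.1), 1))  # 46.6 (the paper reports 46.7, from unrounded rf values)
```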
In chromosome 2, increased rf in selection versus control lines was found in the distal region net-dp of the 2L arm (from 10.7 to 16.9 %, δrf = 58.1 %, P = 2.5 × 10–8), the proximal region cn-kn of the 2R arm (from 12.1 to 19.0 %, δrf = 56.9 %, P = 4.1 × 10–9), and the c-px region of the 2R arm (from 24.1 to 29.0 %, δrf = 20.0 %, P = 1.0 × 10–3). In chromosome 3, an increase in rf was only detected in the interval h-th: from 14.4 to 21.1 (δrf = 46.6 %, P = 5.1 × 10–8). Altogether, in six out of the 16 intervals, significantly higher rf values were obtained in selection lines, and an opposite significant effect was not observed in any of the intervals (Table 1). The sum of rf estimates across the tested intervals in chromosome X changed from 51.9 to 69.2 (δ = 33.5 %), in chromosome 2 from 91.9 to 109.3 (δ = 18.9 %), in chromosome 3 from 52.0 to 61.9 (δ = 19.1 %), and for all 16 scored intervals, from 195.8 to 240.4 (22.8 %).

Fig. 1 Change in recombination rates (± SE) in D. melanogaster caused by directional selection for desiccation tolerance. Significant increases in recombination rates were observed in selection lines (red) compared to control (blue) in intervals cv-v and v-f of chromosome X; net-dp, cn-kn, and c-px of chromosome 2; and h-th of chromosome 3. Asterisks indicate significant differences between selection and control at 0.01 and 0.001 levels using false discovery rate adjusted P values

Table 1 Effect of desiccation selection on recombination rates in 16 regions of the D. melanogaster genome

The increase in rf in the selection lines was accompanied by changes in crossover interference in adjacent and non-adjacent intervals (Table 2; Additional file 1: Table S2; Additional files 4 and 5). Thus, significant positive interference in the region y-cv-v of chromosome X in the control was replaced by no interference in the selection lines: the coefficient of coincidence c increased from 0.56 to 0.95 (P = 8.4 × 10–3).
Moreover, in the cv-v-f region, significant positive interference in the control (ĉ = 0.70) changed to significant negative interference (ĉ = 1.40) in the selection lines; the difference between the two estimates was highly significant (P = 1.1 × 10–16). We did not find negative interference in chromosome 2, but the tendency towards significant relaxation of positive interference in the selection lines was expressed in both arms, e.g. in region net-dp-b in 2L (from 0.35 to 0.81, P = 2.3 × 10–5) and in cn-kn-px in 2R (from 0.38 to 0.91, P = 1.3 × 10–6). In arm 2R, this tendency was also observed for pairs of non-adjacent intervals, e.g. for cn-kn_c-px, with ĉ = 0.41 in the control and ĉ = 0.95 in the selection lines (P = 3.1 × 10–6; Additional file 1: Table S2). As in chromosomes X and 2, selection caused a consistent and, in certain cases, highly significant tendency toward relaxation of positive interference in adjacent and non-adjacent intervals in chromosome 3. Moreover, in some pairs of intervals, significant positive interference was replaced by significant negative interference, e.g. in the h-cu-sr region, with ĉ = 0.41 in control and ĉ = 1.32 in selection lines (P = 5.1 × 10–7). Notably, we also observed a tendency toward relaxation of positive interference for intervals separated by the centromere. For example, segments ru-h and h-th are located in the 3L arm, while cu-sr and sr-e are in the 3R arm (Additional file 2: Figure S2). A significant relaxation of positive interference was observed for several pairs of these intervals: for ru-h_cu-sr, coefficient ĉ changed from 0.52 to 1.14 (P = 2.3 × 10–3), for ru-h_cu-e from 0.41 to 0.92 (P = 4.0 × 10–4), and for ru-th_cu-sr from 0.36 to 0.73 (P = 5.8 × 10–3; Additional file 1: Table S3 and Additional file 5). As with adjacent intervals, replacement of significant positive interference in the control by significant negative interference in the selection lines was also found for non-adjacent intervals, e.g.
for pair h-th_cu-sr, coefficient ĉ changed from 0.49 to 1.56 (P = 1.9 × 10–6).

Table 2 Effect of desiccation selection on the coefficient of coincidence in adjacent intervals of the major chromosomes of D. melanogaster

Effect of two-way selection for hypoxia/hyperoxia tolerance on recombination

As with selection for desiccation tolerance, selection for both hypoxia and hyperoxia tolerance resulted in highly significant interval-specific increases in rf (Fig. 2, Table 3; Additional file 3). No significant decrease in rf was observed in any of the 16 marker intervals in either direction of selection. In total, indirect selection for increased recombination had a significant effect on more intervals in hypoxia lines than in hyperoxia lines (7 vs. 4). Fisher's exact test for the 2 × 2 contingency table of the outcomes of these two experiments across 16 intervals indicated their significant association (P = 0.019). The observed changes in rf were more pronounced in the lines selected for hypoxia tolerance (Table 3), excluding the reaction of the cv-v interval, with δrf = 38.7 % (P = 5.1 × 10–8) and 43.7 % (P = 2.1 × 10–9) in hypoxia and hyperoxia lines, respectively. This interval was among the most reactive with respect to δrf in the entire hypoxia/hyperoxia experiment. Other hyper-reactive intervals (all in hypoxia-tolerant lines) included net-dp in the 2L arm, with δrf = 39.4 % (P = 4.2 × 10–5), and cu-sr and sr-e in chromosome 3, with δrf = 47.0 % (P = 1.8 × 10–3) and 56.9 % (P = 1.6 × 10–3), respectively. No change in rf was observed in the 2R arm. The sum of rf values across all 16 tested intervals changed from 192.9 to 228.7 (δ = 18.6 %) in hypoxia-selected lines and to 216.5 (δ = 12.2 %) in hyperoxia-selected lines.

Fig. 2 Change in recombination rates (±SE) in D. melanogaster caused by directional selection for (a) hypoxia and (b) hyperoxia tolerance.
Significant increases in recombination rates were observed in the hypoxia selection variant (red) compared to control (blue) in intervals y-cv, cv-v, and v-f of chromosome X; net-dp and dp-b of chromosome 2; and cu-sr and sr-e of chromosome 3. In the hyperoxia selection variant, a significant increase in recombination rate was observed in all tested intervals of chromosome X, but only in the dp-b interval of chromosome 2 and in no interval of chromosome 3. Asterisks indicate significant differences between selection and control variants at 0.05 and 0.001 levels using false discovery rate adjusted P values

Table 3 Effect of hypoxia and hyperoxia selection on recombination rates in 16 regions of the D. melanogaster genome

Selection for hypoxia and hyperoxia tolerance also caused relaxation of positive interference and the appearance of negative interference. In the X chromosome, the latter effect was expressed particularly strongly, in both directions of selection, in pairs of adjacent and non-adjacent intervals (Table 4; Additional file 1: Table S3 and Additional files 4 and 5). Remarkably, for the y-cv-f region, no interference in the control (ĉ = 1.05) changed to highly significant negative interference in the selection lines: ĉ = 2.12 in hypoxia (P = 9.5 × 10–28) and ĉ = 2.09 in hyperoxia (P = 6.8 × 10–25). A similar pattern, for either adjacent or non-adjacent pairs of intervals (net-dp-b, dp-b-pk, net-dp_b-pk, and net-dp_b-cn), was observed in the 2L arm for both directions of selection (Additional file 1: Table S3). The difference between control and selection lines was more pronounced for hypoxia selection and for adjacent pairs of intervals. Although selection had no significant effect on rf in arm 2R, changes in crossover interference in adjacent and non-adjacent intervals of 2R were observed in the hyperoxia selection lines.
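The Fisher's exact test reported earlier, for the 2 × 2 table of significant intervals across the two experiments (7 of 16 under hypoxia, 4 of 16 under hyperoxia), can be reproduced with a small self-contained implementation. The joint counts are not given in the text; the counts below are hypothetical but consistent with the reported margins and with P = 0.019 (i.e. all four hyperoxia-significant intervals also significant under hypoxia):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p(k):  # probability of a table with k in the top-left cell
        return comb(row1, k) * comb(n - row1, col1 - k) / comb(n, col1)

    p_obs = p(a)
    k_min, k_max = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p(k) for k in range(k_min, k_max + 1) if p(k) <= p_obs + 1e-12)

# Hypothetical joint counts: 4 intervals significant in both experiments,
# 3 in hypoxia only, 0 in hyperoxia only, 9 in neither (margins 7 and 4 of 16).
print(round(fisher_exact_2x2(4, 3, 0, 9), 3))  # 0.019
```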
Thus, for adjacent intervals, the coefficient of coincidence increased from ĉ = 0.43 to 0.72 (P = 2.9 × 10–2) for cn-kn-sp, from 0.42 to 0.70 (P = 1.8 × 10–2) for cn-c-sp, and from 0.37 to 0.66 (P = 4.6 × 10–2) for cn-px-sp. In chromosome 3, no changes in rf or interference were found in hyperoxia-tolerant lines. Although no increase in rf in the lines selected for hypoxia tolerance was detected in the 3L arm, we observed significant changes in interference in this arm: either considerable relaxation of strong positive interference (e.g. in the h-th-sr region) or replacement of significant positive interference with no interference (e.g. in the ru-h-th region). Relaxation of interference was also noted for non-adjacent intervals, including across-centromere effects: for the pair of intervals ru-th_cu-e, coefficient ĉ changed from 0.15 to 0.49 (P = 4.8 × 10–4).

Table 4 Effect of hypoxia and hyperoxia selection on the coefficient of coincidence in adjacent intervals of the major chromosomes of D. melanogaster

Between-replicate heterogeneity in recombination rates and changes in interference

Analysis of 16 genomic intervals showed segment-specific increases in recombination rate and relaxation of positive interference, or even its replacement by negative interference, in all three selection experiments compared to the corresponding controls. The question is whether the changes in interference, deduced using the estimates of the coefficient of coincidence, represent a 'true' cytogenetic effect, or an alternative process? Säll and Bengtsson [63] demonstrated that, even in the absence of negative interference, heterogeneity of recombination rates within a sample, with positive covariation of rf values in two intervals, may lead to an upwardly biased ĉ and even to ĉ values highly significantly exceeding c = 1, i.e. the false discovery of negative interference.
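The bias described by Säll and Bengtsson is easy to reproduce numerically: pooling sub-samples whose rf values covary positively across the two intervals yields ĉ > 1 even when crossovers are fully independent within each sub-sample (the numbers below are invented for illustration):

```python
# Two equally sized sub-samples, each with fully independent crossovers
# (c = 1 within each), but with positively covarying rf across intervals:
subsamples = [(0.1, 0.1), (0.3, 0.3)]  # (rf interval 1, rf interval 2)

pooled_rf1 = sum(r1 for r1, _ in subsamples) / len(subsamples)            # 0.2
pooled_rf2 = sum(r2 for _, r2 in subsamples) / len(subsamples)            # 0.2
pooled_doubles = sum(r1 * r2 for r1, r2 in subsamples) / len(subsamples)  # 0.05

c_hat = pooled_doubles / (pooled_rf1 * pooled_rf2)
print(round(c_hat, 2))  # 1.25 -> spurious "negative interference" from heterogeneity alone
```

In expectation, ĉ = 1 + cov(rf1, rf2) / (E[rf1]·E[rf2]) under within-stratum independence, which is why the authors check the between-replicate correlation of rf values before interpreting ĉ > 1 as a real cytogenetic effect.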
To reduce the risk of such outcomes when combining potentially heterogeneous data from the replicated lines, we used a weighted maximum likelihood (ML) approach in addition to the standard ML approach (see Methods). A special analysis of data heterogeneity and of the correlation between rf values was performed to assess the possible effect on ĉ estimates. As explained in Methods, the segregating progeny of each of the three replicate control and selection lines were obtained in three bottles, each with approximately 250 flies (Additional file 2: Figure S1). Although each such trio of sub-samples represents the same selection or control line, analyzing them separately enables taking into account one additional source of variation at post-meiotic stages (differential survival of the progeny) that might affect the rf estimates. The small size of these sub-samples (n = 250) precludes the possibility of interference analysis for a considerable portion of the interval pairs on such a sub-replicate level, but linkage analysis is still possible. Thus, based on nine data points (three replicate lines × three bottles of backcross segregants per line), we could calculate the correlation between rf values for pairs of intervals, either adjacent or non-adjacent (Additional file 1: Tables S2 and S3 for the desiccation and hypoxia/hyperoxia selection experiments, respectively). For each of the three experiments, the following question can be addressed: is there any association between a significant change of c in selection material for certain interval pairs and a significant positive correlation between rf values for the same interval pairs? The analysis (Additional file 1: Table S4a) suggests that this factor does not explain the cases of a significant increase of c values in selection lines in any of the three selection experiments.
Likewise, cases with a significant increase of c in selection lines do not show strong associations with a significant increase in rf in one or both segments (Additional file 1: Table S4b,c and Additional file 6: Text S1).

Additional observations on negative interference

In all three experiments, the most pronounced changes toward negative interference were observed in the X chromosome. In the 2L arm, negative interference appeared in a number of cases in hypoxia- and hyperoxia-tolerant lines, while desiccation-tolerant lines manifested only a reduction in positive interference. In the 2R arm, desiccation- and hyperoxia-tolerant lines showed relaxation of positive interference, while no such effect was observed in the hypoxia-tolerant lines. In some intervals of chromosome 3, negative interference appeared in the desiccation-tolerant lines, while only relaxation of positive interference was observed in hypoxia-tolerant lines, and there was no effect in hyperoxia-tolerant lines. We also found that negative interference caused by selection can be accompanied by a decrease in rf over long intervals compared to controls. Thus, rf values along the X chromosome in hypoxia-tolerance selection lines significantly exceeded the corresponding control values (Table 3): 17.46 vs. 13.15 in y-cv, 26.29 vs. 18.96 in cv-v, and 28.98 vs. 22.57 in v-f. Nevertheless, the rf value for y-f in control lines was significantly higher than that in selection lines: 38.58 vs. 26.66 (Additional file 3). Similarly, a higher rf value for y-f in control lines compared to selection lines was observed in the hyperoxia experiment (38.58 in control vs. 27.77 in selection) and the desiccation experiment (39.26 in control vs. 31.46 in selection), as well as for net-cn of the 2L arm in the hypoxia and hyperoxia experiments (45.64 in control vs. 35.84 in hypoxia and 39.68 in hyperoxia; Additional file 3).
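This apparent paradox — component rfs rising while the rf of the long spanning interval falls — can be illustrated with the standard two-interval relation rf_AC = r1 + r2 − 2·c·r1·r2, which follows because a double crossover restores the parental arrangement of the flanking markers. The numbers below are purely illustrative, not the paper's estimates.

```python
# Sketch: observed rf over a composite interval A-C spanning two
# adjacent intervals with rfs r1, r2 and coefficient of coincidence c.
# Double crossovers restore the parental phase of the flanking markers,
# so rf_AC = r1 + r2 - 2*c*r1*r2. Values below are illustrative only.

def rf_composite(r1, r2, c):
    return r1 + r2 - 2 * c * r1 * r2

r1, r2 = 0.25, 0.25

for c in (0.0, 1.0, 2.0):  # complete positive, no, negative interference
    print(f"c = {c}: rf_AC = {rf_composite(r1, r2, c):.3f}")
# the larger c is (negative interference), the smaller the observed
# rf over the long interval, even though r1 and r2 are unchanged
```

With c = 0 the composite rf is 0.500; with c = 1 it drops to 0.375; with c = 2 (negative interference) it drops further to 0.250, matching the pattern reported for y-f and net-cn.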
This non-monotonicity can be explained by the increased chance of double-recombination events among the component shorter intervals (see the corresponding estimates of c in Additional file 1: Tables S2 and S3). The non-monotonicity raises an important point about marker spacing in experimental recombination-evolution studies: by choosing intervals that are too long, one might observe rf values that remain the same, or even appear to decrease, when the map length has truly increased (due to the increased chance of double crossovers). This caveat may be relevant for some of the long intervals used in previous studies and in the current data.

Discussion

We estimated genomic changes in recombination in D. melanogaster caused by long-term selection (50–200 generations) for tolerance to desiccation, hypoxia, and hyperoxia. Using the same sets of markers, we provide robust evidence of indirect selection for recombination in all three experiments. We found that long-term selection has resulted in a dramatic increase in recombination rates in different genomic regions (up to 40–50 % per interval) relative to control levels. The X chromosome displayed a higher response than the autosomes in all three experiments. No significant reduction in rf was observed in any of the 16 genomic intervals analyzed, for any of the three experiments. Remarkably, in addition to the unidirectional changes in rf, we observed a highly significant increase in the rate of double crossovers, expressed as relaxation of positive interference and the occurrence of negative interference. Relaxation of positive interference was evident for all tested chromosomes in all three experiments, whereas the intervals with selection-induced negative interference differed between the selection regimes.
Crossover interference as an evolving phenotype

A comparison of meiotic mutants and normal genotypes leads to the conclusion that the genomic distribution of crossover exchanges in normal meiosis is more restricted and less proportional to physical distances than in meiosis altered by mutations [64–66]. Thus, these restrictions may be largely a result of evolutionary adjustments of crossover distribution along the chromosome. Relaxation of (positive) interference in meiotic mutants has also been observed, despite a general tendency toward linkage tightening [67–69]. Such effects (crossover re-distribution along the chromosome and relaxation of interference) were also displayed by mei-mutants with increased recombination rates [70]. These observations suggest that the direction and level of interference are evolvable phenotypes. The first formal analysis of interference-modifier evolution was conducted by Goldstein et al. [61]. Using numerical analysis, they showed that in an overdominance selection model, interference modifiers evolve to reduce the overall recombination rate, whereas in a mutation-selection balance model interference can evolve toward an overall increase in recombination if the fitness effects of the selected loci are super-multiplicative. However, no evidence of changes in interference in evolution experiments has been available to date. Our results indicate that long-term directional selection for recombination-unrelated traits may lead not only to an increase in recombination rates, but also to relaxation of positive interference and the appearance of negative interference.

Alternative explanations for the obtained results

The repeatable observation of an association between directional selection and increased recombination implies selection for rec modifiers [12, 71, 72] or changes in the respective genomic regions' ability to recombine.
These two mutually non-exclusive scenarios can be considered as changes in regulating and reacting systems of the hierarchical control of recombination [12, 72]. The fixation of polymorphic recombination hotspot motifs can serve as an example of changes in the reacting system. Selection pressure may also strengthen the ability to recombine if the initial material was heterozygous for small inversions and evolved toward structural homozygosity due to selection and drift. However, despite our growing understanding of the importance of structural heterozygosity in population-genetic experiments with D. melanogaster [73], this assumption cannot explain the reproducibility of the observed patterns among replicates and the similarity of rf values between the controls for desiccation and hypoxia/hyperoxia experiments, as well as their good correspondence with the standard D. melanogaster genetic map. More importantly, this assumption is also incompatible with exclusively upward changes in rf in all selection lines (Tables 1 and 3). Another assumption, that the increase in rf was caused by initial positive LDs between the advantageous alleles conferring resistance (to desiccation, hypoxia, or hyperoxia) and recombination alleles increasing rf, is also improbable. Indeed, this assumption implies a prevalence of cis-regulation of recombination for all intervals that showed a selection-induced increase in rf; it also requires a further assumption of uniformity of signs of such LDs. Moreover, this explanation contradicts our findings of unidirectional changes of rf in both hypoxia and hyperoxia selection experiments, similar to earlier findings of unidirectional changes of rf in two-way selection for geotaxis [51]. Theoretical analysis shows that the fitness epistasis caused by truncation selection with a steadily moving optimum can have a powerful effect on selection for increased recombination in large populations [18]. 
An alternative mechanism is fluctuation of LD in small populations combined with directional selection, which may also lead to higher recombination [19]. In the present study, we observed increased recombination in three independent replicates of each selection experiment – for desiccation, hypoxia, and hyperoxia tolerance. Presumably, both abovementioned mechanisms could play a role in the observed changes in recombination. However, although selection × drift interaction may be an important factor contributing to the evolutionary advantage of increased recombination, the high uniformity of the replicates enables us to suggest that directional selection with a steadily moving optimum has played a leading role in the observed recombination response. As shown by Charlesworth [18], selection pressure on a rec-modifier when a trait is subject to selection with a steadily moving optimum should be sufficient to account for observed increases in rf in artificial selection experiments, especially for organisms with small chromosome number, like D. melanogaster. The observed pattern of recombination changes across the genome induced by selection for traits unrelated to recombination does not necessarily adequately reflect the distribution of loci affecting those traits. Flexon and Rodell [48] did find such a correspondence in their pioneering study of the effect of selection for resistance to DDT on recombination in D. melanogaster and revealed a positive correlation between the chromosome contribution to resistance and the extent of change in rf relative to the control level. It is worth noting that experiments involving direct selection for changed recombination have shown that selection for rf in one region can result in a spectrum of correlated changes in other regions with different chromosomes being involved in this changed control of recombination [39, 41, 45]. 
Concerning our results, out of 188 genes residing in hypoxia-tolerance selected regions [74], 44 are located on the 3R arm and 144 on the X chromosome; 10 of these genes from 3R and 52 from X belong to the intervals with observed significant increases in rf (y-f for X and th-e for 3R). To evaluate whether the increase in rf is coordinated with the selection of new combinations of alleles of relevant tolerance genes, these results should be complemented with fine-scale assays of recombination landscapes and genome scanning for footprints of selection. This would enable testing whether alterations in the recombination system caused by long-term selection include a change in the 'spectrum of recombinants', i.e. involvement in crossover exchanges in genomic regions that were excluded from crossing-over in controls [12, 68], or simply reflect a quantitative increase in rf. Presumably, episodes of novel intensive selection pressures are not uncommon in nature [14, 15, 75]. As noted by Barton [14], "…it remains possible that local populations experience far more directional selection, and that it is this which sustains widespread sex and recombination". D. melanogaster is one of the organisms that, at least outside of its native habitats in Africa, seems to undergo boom-bust cycles, dramatically reducing the long-term effective population size and allowing adaptation in the boom years to occur in populations of large short-term effective population size, enabling short-term evolution to act primarily on pre-existing intermediate-frequency genetic variants that are driven the rest of the way to fixation via soft sweeps [76, 77]. The results of the current study indicate that selection for stress tolerance can lead to a considerable increase in the level of recombination and also deeply modify such basic features of recombination as crossover interference, displayed by relaxation of positive interference, and even evolution of negative interference. 
Until now, theoretical studies of recombination evolution have concentrated on the central question of 'why sex and recombination?', ignoring the fact that several important features of recombination also remain unexplained, including its environmental dependence, the widespread occurrence of crossover interference, sex differences in rf, and its species-specificity, to name just a few ([12, 71, 78]; but see [12, 79, 80]). Comparative analysis of recombination in ecologically divergent populations and assessment of changes in recombination in selection experiments may serve as an important source of evidence for better understanding the mechanisms of maintenance of sexual recombination and explaining why recombination is so variable within and between species.

Methods

Three sets of D. melanogaster lines resulting from long-term directional selection for stress tolerance were employed in our experiments: (1) three desiccation-resistant lines established by selection over 48 generations; (2) three lines tolerant to severe hypoxic stress generated through long-term experimental selection (more than 200 generations); and (3) three hyperoxia-tolerant lines. Details of the experimental scheme for hypoxia-tolerance selection were provided elsewhere [81, 82]. Peculiarities of the selection for hyperoxia tolerance are described by Zhao et al. [83]. Selection for desiccation tolerance was performed by DDA.

Selection for desiccation tolerance

Wild individuals of D. melanogaster (n = 120) were collected in March 2009 from Jabalpur, Madhya Pradesh, India (23°30'N; 80°01'E; alt. 393 m). Before the start of the selection experiment, mass culture was maintained for five generations under standard laboratory conditions at low density (on yeast-cornmeal-agar medium at 21 °C and ~70 % relative humidity) to eliminate environmental effects. For laboratory selection, virgin flies were sexed under CO2 anesthesia at least 48 h prior to the experiment.
Then, virgin flies (3–4 days old) were placed in groups of 25 into plastic vials containing 2 g of silica gel and covered with foam discs. Experiments were conducted for males and females separately. Flies were subjected to desiccation stress until approximately the LT70–LT85 level of mortality was reached. Control groups were established in the same manner, excluding water stress. In each generation, we examined approximately 1,000 virgin flies of each sex per replicate, of which at least 100 males and 100 females survived the LT70–85 cut-off to become the parents of the next generation. For each group (selection and control), survivors were randomly allocated into three sub-groups (three replicates). The same protocol was repeated for 48 generations (each successive generation was subjected to the analogous treatment), and then selection was relaxed for 8–10 generations before initiating the recombination tests. The control lines were not subjected to any treatment and were maintained at densities comparable to the selection lines on standard media. In the present study, we used three control and three desiccation-resistant lines for recombination tests. Average desiccation tolerance of the initial population was 14.8 h and 23.2 h (SD = 2.88 and 3.44) for males and females, respectively. After 48 generations of selection, these tolerance values increased to 25.3 h and 43.6 h for males and females, respectively, i.e. by 3.65 SDs and 5.93 SDs relative to the starting population.

Hypoxia- and hyperoxia-tolerant lines

Selection for hypoxia/hyperoxia tolerance was initiated after crossing 27 isofemale D. melanogaster lines (kindly provided by Dr. Andrew Davis) that varied considerably in the acute anoxia test as well as in eclosion rates when cultured under hypoxic or hyperoxic conditions. Males and virgin females (n = 20) were collected and pooled from each isofemale line. This parental population was reared at room temperature on standard food medium.
F1 embryos from the pooled population were separated and maintained in nine separate chambers, three each for the control, hypoxia-selection, and hyperoxia-selection experiments. Trial experiments were run to determine the starting O2 concentrations for hypoxia- and hyperoxia-tolerance selection. We analyzed the viability and tolerance capacity of the F1 progeny of the parental cross at different O2 concentrations (8, 6, or 4 % O2 for hypoxia selection and 60 %, 70 %, 80 %, and 90 % O2 for hyperoxia selection). In addition, the tolerance levels of each parental line to hypoxia or hyperoxia were measured by testing survival of each individual line in the hypoxic or hyperoxic environments. Based on these trials, selection for hypoxia tolerance was started at 8 % O2 and for hyperoxia tolerance at 60 % O2. The low O2 concentration was gradually decreased by 1 % and the high O2 concentration increased by 10 % every 3 to 5 generations to maintain the selection pressure. The population size was kept at around 2,000 flies in each generation. Eggs of the first egg-laying for each generation were removed to limit genetic drift induced by the 'early-bird' effect. After seven generations of selection, hyperoxia tolerance had increased to 80 % O2, and after 13 generations the hypoxia tolerance of the hypoxia-selected flies reached 5 % O2, a level that is lethal for most of the control flies (Additional file 2: Figure S3). The hyperoxia-selected flies broke through the lethal hyperoxic level (90 % O2) after 13 generations of selection, and the hypoxia-selected flies exhibited tolerance to a severe level of hypoxia (4 % O2, embryonic-lethal to control flies) following 32 generations of selection. Lethality in these selection experiments was defined as the level of oxygen at which D. melanogaster cannot complete development and reproduce.
Genetic crosses

Virgin females (3 days post-eclosion) of each control and selection line (three replicate lines each for the control and selection groups) were allowed to mate with males of marker stocks (Additional file 2: Figure S1). Four marker stocks were employed (Additional file 2: Figure S2): y cv v f for the X chromosome, net dp b pk cn for the 2L arm, cn kn c px sp for the 2R arm, and ru h th cu sr e for chromosome 3. F1 heterozygous virgin females were collected for each replicate line and thereafter test-crossed with marker males. Because maternal age may also influence rf in D. melanogaster, we reduced this effect by allowing the 50- to 60-hour old (post-eclosion) F1 virgin females to mate with marker males for approximately 48 hours. To obtain a sufficient number of flies per replicate for scoring recombination, each replicate line was divided into three sub-replicates before the start of the recombination experiments. In this panel, we scored recombination in nine sub-replicates (three per replicate line) each for control and selection. In the desiccation experiment, we scored 1,050 individuals per replicate line (350 individuals per sub-replicate), i.e. a total of 6,300 flies were counted for estimation of rf on the X chromosome. We scored 750 individuals per replicate line (250 individuals per sub-replicate), i.e. 4,500 individuals each, for arms 2L and 2R and for chromosome 3. A total of 19,800 flies were counted for estimation of rf in the desiccation-selection experiment. Similarly, 750 flies per line, or a total of 27,000 flies, were scored for rf in the hypoxia/hyperoxia experiments. Across the three experiments, we scored a total of 46,800 individuals. For each pair of intervals and each of the three control or selection lines, ML analysis was performed to estimate the recombination frequencies r1_k and r2_k together with the coefficient of coincidence c_k (k = 1, 2, 3).
For a pair of intervals, either adjacent or non-adjacent, the log-likelihood function had the following form:

$$ \log L\left(r1_k, r2_k, c_k\right) = \sum_{i,j} n_{ij,k}\, \log p_{ij,k}\left(r1_k, r2_k, c_k\right) $$

where i, j ∈ {0, 1} indicate whether a recombination event occurred in the first or second interval, respectively (0 – no recombination, 1 – recombination), k denotes the replicate line, and p_{ij,k} and n_{ij,k} represent the probability and the observed number of individuals of genotype class ij in replicate line k of the backcross progeny (within control or selection). The frequencies of the four genotype classes were defined as:

$$ \begin{aligned} p_{11,k} &= r1_k\, r2_k\, c_k,\\ p_{10,k} &= r1_k\left(1 - r2_k c_k\right),\\ p_{01,k} &= r2_k\left(1 - r1_k c_k\right),\\ p_{00,k} &= 1 - r1_k - r2_k + r1_k r2_k c_k. \end{aligned} $$

The ML estimate $\hat{\boldsymbol{\theta}}_k$ of the vector $\boldsymbol{\theta}_k = (r1_k, r2_k, c_k)$, k = 1, 2, 3, was obtained by numerical maximization of the log-likelihood, using a gradient procedure in which all three parameters $r1_k$, $r2_k$, and $c_k$ are updated simultaneously in every iteration:

$$ \begin{aligned} r1_{n+1,k} &= r1_{n,k} + \alpha_{n+1}\,\frac{\partial \log L(\boldsymbol{\theta}_k)}{\partial r1_k}\\ r2_{n+1,k} &= r2_{n,k} + \alpha_{n+1}\,\frac{\partial \log L(\boldsymbol{\theta}_k)}{\partial r2_k}\\ c_{n+1,k} &= c_{n,k} + \alpha_{n+1}\,\frac{\partial \log L(\boldsymbol{\theta}_k)}{\partial c_k} \end{aligned} $$

where n is the iteration number, k the line (within control or selection), and α the step size.
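For a single line, the four-class model above is saturated (three free parameters, three degrees of freedom), so the ML optimum that the gradient iteration converges to has a simple closed form. The sketch below illustrates this with hypothetical counts; it is not the authors' implementation.

```python
# Sketch: ML estimation of r1, r2 and coefficient of coincidence c for
# one replicate line from backcross counts n[ij], where i, j flag
# recombination in intervals 1 and 2. For this saturated 4-class model
# the ML estimates simply match the observed class frequencies.
# Counts below are hypothetical.

n = {"11": 30, "10": 170, "01": 270, "00": 530}
N = sum(n.values())

r1_hat = (n["11"] + n["10"]) / N  # marginal rf of interval 1
r2_hat = (n["11"] + n["01"]) / N  # marginal rf of interval 2
# c = observed double recombinants / expected under independence
c_hat = n["11"] * N / ((n["11"] + n["10"]) * (n["11"] + n["01"]))

# class probabilities under the fitted model (reproduce the observed
# class frequencies, confirming this is the likelihood optimum)
p11 = r1_hat * r2_hat * c_hat
p10 = r1_hat * (1 - r2_hat * c_hat)
p01 = r2_hat * (1 - r1_hat * c_hat)
p00 = 1 - r1_hat - r2_hat + r1_hat * r2_hat * c_hat

print(r1_hat, r2_hat, c_hat)  # 0.2 0.3 0.5
```

Here ĉ = 0.5 < 1 corresponds to positive interference: only half the double crossovers expected under independence are observed.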
The variances of the estimated parameters $r1_k$, $r2_k$, $c_k$ were calculated as the corresponding diagonal elements of the covariance matrix $\mathbf{V}_k = \mathbf{I}^{-1}(\hat{\boldsymbol{\theta}}_k) = \mathbf{I}_k^{-1}$, where $\mathbf{I}$ is Fisher's information matrix [54]. The estimates of the parameter vector $\boldsymbol{\Theta} = (r1, r2, c)$ for the entire group (control or selection), together with the vector $\mathbf{V}_{\Theta}$ of their variances, were obtained as:

$$ \hat{\boldsymbol{\Theta}} = \left(\sum_i \mathbf{I}_i\right)^{-1} \sum_i \mathbf{I}_i\, \hat{\boldsymbol{\theta}}_i \quad \text{and} \quad \mathbf{V}_{\Theta} = \left(\sum_i \mathbf{I}_i\right)^{-1} $$

This approach enables tests of the heterogeneity of the lines within the selection and control groups, across the entire set of selection and control lines, and between the selection and control groups, with respect to the estimated parameters. To assess the heterogeneity of the estimates $\hat{\boldsymbol{\theta}}_m$ of all three parameters $(r1_m, r2_m, c_m)$ in k lines, we can use the following statistic, which is asymptotically distributed as χ² with 3(k − 1) degrees of freedom:

$$ X^2_{3(k-1)} = \sum_{m}\left(\hat{\boldsymbol{\Theta}} - \hat{\boldsymbol{\theta}}_m\right)^{T} \mathbf{I}_m \left(\hat{\boldsymbol{\Theta}} - \hat{\boldsymbol{\theta}}_m\right) $$

To assess the heterogeneity of a single parameter p in k lines, the following statistic, asymptotically distributed as χ² with df = k − 1, can be used:

$$ X^2_{k-1} = \sum_{m}\frac{\left(\hat{P} - \hat{p}_m\right)^2}{\sigma^2_{pm}} $$

where $\hat{p}_m$ is the ML estimate of parameter p in the m-th line, $\sigma^2_{pm}$ is its squared standard error, and $\hat{P}$ is the information-weighted mean of the $\hat{p}_m$.
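For a single parameter, the information-weighted combination and the heterogeneity statistic reduce to the familiar inverse-variance scheme. The following is a minimal numerical sketch with hypothetical per-line estimates, not the paper's data:

```python
# Sketch: inverse-variance (information-) weighted combination of one
# parameter p across k replicate lines, plus the heterogeneity
# statistic X^2 with k-1 df. Estimates and SEs are hypothetical.

est = [0.40, 0.50, 0.60]   # per-line ML estimates of parameter p
var = [0.01, 0.01, 0.01]   # squared standard errors (sigma_pm^2)

w = [1 / v for v in var]                               # information weights
p_hat = sum(wi * e for wi, e in zip(w, est)) / sum(w)  # weighted mean
v_hat = 1 / sum(w)                                     # variance of the mean

# heterogeneity across lines, compared against chi-square with k-1 df
x2 = sum((p_hat - e) ** 2 / v for e, v in zip(est, var))

print(p_hat, v_hat, x2)  # 0.5, 1/300, ~2.0
```

With equal variances the weighted mean is the plain average and X² = 2.0, well below the 5 % χ² threshold for df = 2, i.e. no detectable heterogeneity in this toy case.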
Using this weighted likelihood approach, we can present the total heterogeneity of the $\hat{\boldsymbol{\theta}}_m$ across all lines of the control and selection groups as:

$$ X^2_{\mathrm{total\ (control+selection)}} = X^2_{\mathrm{within\ (control)}} + X^2_{\mathrm{within\ (selection)}} + X^2_{\mathrm{between\ (control\ vs.\ selection)}} $$

Thus, the significance of the difference between selection and control lines can be tested using the statistic:

$$ X^2_{\mathrm{between\ (control\ vs.\ selection)}} = X^2_{\mathrm{total\ (control+selection)}} - X^2_{\mathrm{within\ (control)}} - X^2_{\mathrm{within\ (selection)}} $$

which is distributed approximately as χ² with df = 1 under H0 {no difference between the compared groups (selection vs. control) for the parameter p}. The importance of using this approach in testing the differences in interference derives from the fact that heterogeneity of recombination rates within the sample (e.g. between replicate lines of the selection group), with positive co-variation of recombination rates in two intervals, may lead to upwardly biased estimates of c, and even to c > 1 [63]. Therefore, to reduce the danger of such outcomes while testing for the significance of differences between control and selection lines in each of the three experiments, we employed, wherever possible, the weighted ML estimates of the recombination (Additional file 3) and interference (Additional files 4 and 5) parameters in the weighted likelihood approach, in addition to the standard ML approach (see below).
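The within/between partition can be sketched numerically as follows. All inputs are hypothetical per-line estimates of c; the χ²(df = 1) p-value uses the identity P(χ²₁ > x) = erfc(√(x/2)).

```python
# Sketch: partition of total heterogeneity into within- and
# between-group components, with a chi-square (df = 1) p-value for the
# control-vs-selection contrast. All per-line values are hypothetical.
from math import erfc, sqrt

def weighted_stats(est, var):
    """Inverse-variance weighted mean, its variance, and X^2."""
    w = [1 / v for v in var]
    mean = sum(wi * e for wi, e in zip(w, est)) / sum(w)
    x2 = sum((mean - e) ** 2 / v for e, v in zip(est, var))
    return mean, 1 / sum(w), x2

# hypothetical per-line estimates of c in control and selection groups
c_ctrl, v_ctrl = [0.40, 0.45, 0.42], [0.02, 0.02, 0.02]
c_sel, v_sel = [0.70, 0.66, 0.74], [0.02, 0.02, 0.02]

_, _, x2_ctrl = weighted_stats(c_ctrl, v_ctrl)     # within control
_, _, x2_sel = weighted_stats(c_sel, v_sel)        # within selection
_, _, x2_total = weighted_stats(c_ctrl + c_sel, v_ctrl + v_sel)

x2_between = x2_total - x2_ctrl - x2_sel
p_value = erfc(sqrt(x2_between / 2))  # chi-square survival, df = 1

print(f"X^2_between = {x2_between:.2f}, P = {p_value:.4f}")
```

In this toy example the between-group component dominates the total heterogeneity, giving a significant control-vs-selection difference in c despite modest within-group scatter.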
However, where $\hat{c}_k$, the estimate of c, was zero in one or more of the three control or selection lines, its standard error was also zero, giving that line infinite weight and driving the weighted average for the corresponding selection or control group to zero. Thus, for all the data we also employed the standard and more direct ML approach, allowing each line, in both selection and control, to have its own $r1_k$ and $r2_k$. Namely, to test for the significance of the differences between c values in selection and control, we performed a log-likelihood ratio test of H0 {one global c for all selection and control lines} versus H1 {two c's, one for all selection lines and one for all control lines}:

H1 : {Θ_control = (r1_c, r2_c, c_c), Θ_selection = (r1_s, r2_s, c_s)} vs. H0 : {Θ_control = (r1_c, r2_c, c_g), Θ_selection = (r1_s, r2_s, c_g)},

where the vectors r1_c and r2_c represent the unknown rf values for the analyzed pair of intervals in the three control lines, r1_s and r2_s the corresponding vectors for the three selection lines, c_c and c_s the line-independent coefficients of coincidence for the control and selection groups, and c_g the global c under the H0 assumption that c_s = c_c. Therefore, the H1 and H0 hypotheses are specified by 14 and 13 parameters, respectively, and the log-likelihood ratio test of H1 versus H0 is asymptotically distributed as χ² with df = 1. The obtained P values (two-tailed test) were subjected to false discovery rate correction for multiple comparisons before presentation in tables, figures, and text. For the false discovery rate correction, we used a total of 48 comparisons across the three experiments (16 intervals in each) for the recombination rates, and 189 comparisons for the interference estimates.

Abbreviations

c: Coefficient of coincidence; LD: Linkage disequilibrium; ML: Maximum likelihood; rf: Recombination frequency

References

Burt A. Sex, recombination and the efficacy of selection – was Weismann right? Evolution. 2000;54:337–51.
Fisher RA.
The genetical theory of natural selection. Oxford: Oxford University Press; 1930.
Muller HJ. Some genetic aspects of sex. Am Nat. 1932;66:118–38.
Felsenstein J, Yokoyama S. The evolutionary advantage of recombination. II. Individual selection for recombination. Genetics. 1976;83:845–59.
Kondrashov AS. Deleterious mutations and the evolution of sexual reproduction. Nature. 1988;336:435–40.
Charlesworth B. Mutation-selection balance and the evolutionary advantage of sex and recombination. Genet Res. 1990;55:199–221.
Charlesworth B, Campos JL. The relations between recombination rate and patterns of molecular variation and evolution in Drosophila. Annu Rev Genet. 2014;48:383–403.
Charlesworth B. Recombination modification in a fluctuating environment. Genetics. 1976;83:181–95.
Lenormand T, Otto SP. The evolution of recombination in a heterogeneous environment. Genetics. 2000;156:423–38.
Bell G, Maynard Smith J. Short-term selection for recombination among mutually antagonistic species. Nature. 1987;328:66–8.
Carja O, Liberman U, Feldman MW. Evolution in changing environments: modifiers of mutation, recombination, and migration. Proc Natl Acad Sci U S A. 2014;111:17935–40.
Korol AB, Preygel IA, Preygel SI. Recombination variability and evolution. London: Chapman & Hall; 1994.
Hill WG, Robertson A. The effect of linkage on limits to artificial selection. Genet Res. 1966;8:269–94.
Barton NH. Genetic linkage and natural selection. Phil Trans R Soc B. 2010;365:2559–69.
Barton NH. Mutation and the evolution of recombination. Phil Trans R Soc B. 2010;365:1281–94.
Charlesworth B, Betancourt A, Kaiser VB, Gordo I. Genetic recombination and molecular evolution. Cold Spring Harb Symp Quant Biol. 2009;74:177–86.
Campos JL, Halligan DL, Haddrill PR, Charlesworth B. The relation between recombination rate and patterns of molecular evolution and variation in Drosophila melanogaster. Mol Biol Evol. 2014;31:1010–28.
Charlesworth B. Directional selection and evolution of sex and recombination.
Genet Res. 1993;61:205–24.
Barton NH, Otto SP. Evolution of recombination due to random drift. Genetics. 2005;169:2353–70.
Roze D, Barton NH. The Hill–Robertson effect and the evolution of recombination. Genetics. 2006;173:1793–811.
Barton NH. Why sex and recombination? Cold Spring Harb Symp Quant Biol. 2009;74:187–95.
Rice WR, Chippindale AK. Sexual recombination and the power of natural selection. Science. 2001;294:555–9.
Bachtrog D, Charlesworth B. Reduced adaptation of a non-recombining neo-Y chromosome. Nature. 2002;416:323–6.
Colegrave N. Sex releases the speed limit on evolution. Nature. 2002;420:664–6.
Goddard MR, Godfray HCJ, Burt A. Sex increases the efficacy of natural selection in experimental yeast populations. Nature. 2005;434:636–40.
Betancourt AJ, Welch JJ, Charlesworth B. Reduced effectiveness of selection caused by a lack of recombination. Curr Biol. 2009;19:655–60.
Williford A, Comeron JM. Local effects of limited recombination: historical perspective and consequences for population estimates of adaptive evolution. J Hered. 2010;101 Suppl 1:S127–34.
Langley CH, Stevens K, Cardeno C, Lee YCG, Schrider DR, Pool JE, et al. Genomic variation in natural populations of Drosophila melanogaster. Genetics. 2012;192:533–98.
McGaugh SE, Heil CSS, Manzano-Winkler B, Loewe L, Goldstein S, Himmel TL, et al. Recombination modulates how selection affects linked sites in Drosophila. PLoS Biol. 2012;10:e1001422.
Comeron JM. Background selection as baseline for nucleotide variation across the Drosophila genome. PLoS Genet. 2014;10:e1004434.
Thompson V. Recombination and response to selection in Drosophila melanogaster. Genetics. 1977;85:125–40.
Zeyl C, Bell G. The advantage of sex in evolving yeast populations. Nature. 1997;388:465–8.
Bourguet D, Gair J, Mattice M, Whitlock MC. Genetic recombination and adaptation to fluctuating environments: selection for geotaxis in Drosophila melanogaster. Heredity. 2003;91:78–84.
Bullaughey K, Przeworski M, Coop G. No effect of recombination on the efficacy of natural selection in primates. Genome Res. 2008;18:544–54.
Webster MT, Hurst LD. Direct and indirect consequences of meiotic recombination: implications for genome evolution. Trends Genet. 2012;28:102–9.
Hurst LD, Peck JR. Recent advances in understanding of the evolution and maintenance of sex. Trends Ecol Evol. 1996;11:46–53.
Brooks LD, Marks RW. The organization of genetic variation for recombination in Drosophila melanogaster. Genetics. 1986;114:525–47.
Allard RW. Evidence for genetic restriction of recombination in the lima bean. Genetics. 1963;48:1389–95.
Chinnici JP. Modification of recombination frequency in Drosophila. II. The polygene control of crossing over. Genetics. 1971;69:85–96.
Landner L. Genetic control of recombination in Neurospora crassa: correlated regulation in unlinked chromosome intervals. Heredity. 1971;27:385–92.
Kidwell MG. Genetic change of recombination value in Drosophila melanogaster. I. Artificial selection for high and low recombination and some properties of recombination-modifying genes. Genetics. 1972;70:419–32.
Shaw DD. Genetic and environmental components of chiasma control. II. The response to selection in Schistocerca. Chromosoma. 1972;37:297–308.
Dewees AA. Genetic modification of recombination rate in Tribolium castaneum. Genetics. 1975;81:537–52.
Turner JRG. Genetic control of recombination in the silkworm. Multigenic control of chromosome 2. Heredity. 1979;43:273–93.
Charlesworth B, Charlesworth D. Genetic variation in recombination in Drosophila. II. Genetic analysis of a high recombination stock. Heredity. 1985;54:85–98.
Barton NH. Linkage and the limits to natural selection. Genetics. 1995;140:821–41.
Lobashev ME, Ponomarenko VV, Polyanskaya GG, Tsapygina RI. On the role of the nervous system in regulation of various genetic and cytological processes. J Evol Biochem (USSR). 1973;9:398–405.
Flexon PB, Rodell CF.
Genetic recombination and directional selection for DDT resistance in Drosophila melanogaster. Nature. 1982;298:672–5. Zhuchenko AA, Korol AB, Kovtyukh LP. Change of crossing-over frequency in Drosophila during selection for resistance to temperature fluctuations. Genetica. 1985;67:73–8. Gorodetsky VP, Zhuchenko AA, Korol AB. Efficiency of feedback selection for recombination in Drosophila. Genetika (USSR). 1990;26:1942–52 (in Russian). Korol AB, Iliadi KG. Recombination increase resulting from directional selection for geotaxis in Drosophila. Heredity. 1994;72:64–8. Rodell CF, Schipper MR, Keenan DK. Modes of selection and recombination response in Drosophila melanogaster. J Heredity. 2004;95:70–5. Otto SP, Barton NH. Selection for recombination in small populations. Evolution. 2001;55:1921–31. Bailey NTJ. Mathematical theory of genetic linkage. Amen House, London: Oxford Univ. Press; 1961. Berchowitz LE, Copenhaver GP. Genetic Interference: don't stand so close to me. Curr Genomics. 2010;11:91–102. Loidl J, Scherthan H. Organization and pairing of meiotic chromosomes in the ciliate Tetrahymena thermophila. J Cell Sci. 2004;117:5791–801. Sinclair DA. Crossing over between closely linked markers spanning the centromere of chromosome 3 in Drosophila melanogaster. Genet Res. 1975;11:173–85. Green MM. Conversion as a possible mechanism of high coincidence values in the centromeric region of Drosophila. Mol Gen Genet. 1975;39:57–66. Denell RE, Keppy DO. The nature of genetic recombination near the third chromosome centromere of Drosophila melanogaster. Genetics. 1979;93:117–30. Lake S. Recombination frequencies and the coincidence in proximal X-chromosome regions including heterochromatin in Drosophila melanogaster. Hereditas. 1986;105:263–8. Goldstein DB, Bergman A, Feldman MW. The evolution of interference: reduction of recombination among three loci. Theor Pop Biol. 1993;44:246–59. Wang S, Zickler D, Kleckner N, Zhang L. 
Meiotic crossover patterns: obligatory crossover, interference and homeostasis in a single process. Cell Cycle. 2015;14:305–14. Säll T, Bengtsson BO. Apparent negative interference due to variation in recombination frequencies. Genetics. 1989;122:935–42. Lindsley DL, Sandler L. The genetic analysis of meiosis in female Drosophila melanogaster. Phil Trans Roy Soc Lond B. 1977;277:295–312. Szauter P. An analysis of regional constraints on exchange in Drosophila melanogaster using recombination-defective meiotic mutants. Genetics. 1984;100:45–71. Zetka MC, Rose AM. Mutant rec-1 eliminates the meiotic pattern of crossing over in Caenorhabditis elegans. Genetics. 1995;141:1339–49. Baker BS, Hall JC. Meiotic mutants: genie control of meiotic recombination and chromosome segregation. In: Ashburner M, Novitski E, editors. The Genetics and Biology of Drosophila, Vol 1a. New York: Academic; 1976. p. 351–434. Zhuchenko AA, Korol AB. Recombination in evolution and Breeding. Moscow: Nauka; 1985. In Russian. Bhagat R, Manheim EA, Sherizen DE, McKim KS. Studies on crossover specific mutants and the distribution of crossing over in Drosophila females. Cytogenet Gen Res. 2004;107:160–71. Séguéla-Arnaud M, Crismani W, Larchevêque C, Mazel J, Froger N, Choinard S, et al. Multiple mechanisms limit meiotic crossovers: TOP3α and two BLM homologs antagonize crossovers in parallel to FANCM. Proc Natl Acad Sci. 2015;112:4713–8. Korol AB. Selection for adaptive traits as a factor of recombination evolution: Evidence from natural and experimental populations. In: Wasser SP, editor. Evolutionary theory and processes: modern perspective. Dordrecht: Kluwer; 1999. p. 31–53. Korol AB. Recombination. In: Levin SA, editor. Encyclopedia of Biodiversity, vol. 6. 2nd ed. Waltham: Academic Press; 2013. p. 353–69. Tobler R, Franssen SU, Kofler R, Orozco-Terwengel P, Nolte V, Hermisson J, et al. Massive habitat-specific genomic response in D. 
melanogaster populations during experimental evolution in hot and cold environments. Mol Biol Evol. 2015;31(2):364–75. Zhou D, Udpa N, Gersten M, Visk DW, Bashir A, Xue J, et al. Experimental selection of hypoxia-tolerant Drosophila melanogaster. Proc Natl Acad Sci U S A. 2011;108:2349–54. Becks L, Agrawal AF. The evolution of sex is favoured during adaptation to new environments. PLoS Biol. 2012;10, e1001317. Burke MK, Dunham JP, Shahrestani P, Thornton KR, Rose MR, Long AD. Genome-wide analysis of a long-term evolution experiment with Drosophila. Nature. 2010;467:587–90. Karasov T, Messer PW, Petrov DA. Evidence that adaptation in Drosophila is not limited by mutation at single sites. PLoS Genet. 2010;6k:e1000924. Butlin RK. Recombination and speciation. Mol Ecol. 2005;14:2621–35. Zhuchenko AA, Korol AB, Preigel IA, Bronstein SI. The evolutionary role of the dependence of recombination on environment. Theor Appl Genet. 1995;69:617–24. Lenormand T. The evolution of sex dimorphism in recombination. Genetics. 2003;163:811–22. Zhou D, Xue J, Chen J, Morcillo P, Lambert JD, White KP, et al. Experimental selection for Drosophila survival in extremely low O(2) environment. PLoS One. 2007;2(5), e490. Zhou D, Xue J, Lai JC, Schork NJ, White KP, Haddad GG. Mechanisms underlying hypoxia tolerance in Drosophila melanogaster: hairy as a metabolic switch. PLoS Genet. 2008;4(10), e1000221. Zhao HW, Zhou D, Nizet V, Haddad GG. Experimental selection for Drosophila survival in extremely high O2 environments. PLoS One. 2010;5, e11701. We acknowledge with thanks the three reviewers and Graham Bell for their helpful comments and suggestions. We also thank Zeev Frenkel for productive discussions and help in computer simulations. The study was supported by Binational USA-Israeli Science Foundation (grant BSF # 2011438) and Postdoctoral fellowship for DDA from the Israeli Council for Higher Education and University of Haifa. 
Institute of Evolution, University of Haifa, Haifa 3498838, Israel: Dau Dayal Aggarwal, Eugenia Rashkovetsky, Irit Cohen, Yefim Ronin & Abraham B. Korol
Virginia Bioinformatics Institute, Virginia Tech, Washington Street, MC 0477, Blacksburg, VA 24061-0477, USA: Pawel Michalak
University of California, San Diego, USA: Dan Zhou & Gabriel G. Haddad
Rady Children's Hospital, San Diego, USA: Gabriel G. Haddad
Correspondence to Abraham B. Korol.
DDA conducted the selection for desiccation tolerance and the entire recombination experiments and participated in data analysis and preparation of the manuscript. ER participated in experiments and preparation of the manuscript. PM participated in the preparation of the manuscript. IC and YR developed the algorithms; IC performed the data analysis. GH and DZ conducted the selection for hypoxia and hyperoxia. AK conceived the recombination study, and participated in data analysis and preparation of the manuscript. All authors read and approved the final manuscript.
Additional file 1: Table S1. A review of previous reports on indirect selection for recombination in Drosophila melanogaster. Table S2. Effect of desiccation selection on the coefficient of coincidence in adjacent and non-adjacent intervals of D. melanogaster. Table S3. Effect of hypoxia and hyperoxia selection on the coefficient of coincidence in adjacent and non-adjacent intervals of D. melanogaster. Table S4. Coincidence of interval-pair cases of changes in interference with significant positive correlation between rf values in the interval-pairs (a) and with significant increases in rf in desiccation, hypoxia, and hyperoxia selection variants (b). (PDF 486 kb)
Additional file 2: Figure S1.
Effects of selection for tolerance to desiccation, hypoxia, and hyperoxia stresses on recombination rates in Drosophila melanogaster: (a) general scheme; (b) a fragment of the flowchart for selected line D1 of the desiccation experiment. Figure S2. Marker lines employed in the study. For each chromosome, a separate line was employed, excluding chromosome 2, where two lines (a and b) were used. Figure S3. The schematic presentation of the hypoxia/hyperoxia selection experiment. (PDF 497 kb)
Additional file 3: Estimates of recombination rates per replicate and entire variants (selection and control). (PDF 1054 kb)
Estimates of the coefficients of coincidence for adjacent intervals per replicate and entire variants (selection and control). (PDF 2958 kb)
Estimates of the coefficients of coincidence for non-adjacent intervals per replicate and entire variants (selection and control). (PDF 2103 kb)
Text S1. Changes in recombination and interference are not necessarily directly coupled. (PDF 199 kb)
Keywords: Directional selection, Evolution of interference, Negative interference, Positive interference
Existence of ground state solutions for the planar axially symmetric Schrödinger-Poisson system

Sitong Chen and Xianhua Tang, School of Mathematics and Statistics, Central South University, Changsha 410083, Hunan, China
* Corresponding author: Xianhua Tang
Received May 2018. Revised August 2018. Published January 2019.
Fund Project: This work is partially supported by the Hunan Provincial Innovation Foundation for Postgraduate (No: CX2017B041) and the National Natural Science Foundation of China (No: 11571370).

This paper is concerned with the following planar Schrödinger-Poisson system
$ \left\{ \begin{array}{ll} -\triangle u+V(x)u+\phi u = f(x,u), \ \ \ \ x\in \mathbb{R}^{2},\\ \triangle \phi = u^2, \ \ \ \ x\in \mathbb{R}^{2}, \end{array} \right. $
where $V(x)$ and $f(x,u)$ are axially symmetric in $x$, and $f(x,u)$ is asymptotically cubic or super-cubic in $u$. With a variational approach different from that used in [S. Cingolani, T. Weth, Ann. Inst. Henri Poincaré, Anal. Non Linéaire 33 (2016) 169-197], we obtain the existence of an axially symmetric Nehari-type ground state solution and a nontrivial solution for the above system. Axial symmetry is more general than radial symmetry, but less used in the literature, since the embedding from the space of axially symmetric functions into $L^s(\mathbb{R}^N)$ is not compact. Our results generalize previous ones in the literature, and some of the new phenomena do not occur in the corresponding problem in higher space dimensions.

Keywords: Schrödinger-Poisson system, logarithmic convolution potential, ground state solution, axially symmetric.
Mathematics Subject Classification: Primary: 35J20; Secondary: 35Q55.
Citation: Sitong Chen, Xianhua Tang. Existence of ground state solutions for the planar axially symmetric Schrödinger-Poisson system. Discrete & Continuous Dynamical Systems - B, 2019, 24 (9): 4685-4702. doi: 10.3934/dcdsb.2018329

References:
A. Ambrosetti and D. Ruiz, Multiple bound states for the Schrödinger-Poisson problem, Commun. Contemp.
Math., 10 (2008), 391-404. doi: 10.1142/S021919970800282X.
A. Azzollini and A. Pomponio, Ground state solutions for the nonlinear Schrödinger-Maxwell equations, J. Math. Anal. Appl., 345 (2008), 90-108. doi: 10.1016/j.jmaa.2008.03.057.
V. Benci and D. Fortunato, An eigenvalue problem for the Schrödinger-Maxwell equations, Topol. Methods Nonlinear Anal., 11 (1998), 283-293. doi: 10.12775/TMNA.1998.019.
V. Benci and D. Fortunato, Solitary waves of the nonlinear Klein-Gordon equation coupled with Maxwell equations, Rev. Math. Phys., 14 (2002), 409-420. doi: 10.1142/S0129055X02001168.
R. Benguria, H. Brezis and E. H. Lieb, The Thomas-Fermi-von Weizsäcker theory of atoms and molecules, Comm. Math. Phys., 79 (1981), 167-180. doi: 10.1007/BF01942059.
I. Catto and P. L. Lions, Binding of atoms and stability of molecules in Hartree and Thomas-Fermi type theories, Comm. Partial Differential Equations, 18 (1993), 1149-1159. doi: 10.1080/03605309308820967.
G. Cerami and J. G. Vaira, Positive solutions for some non-autonomous Schrödinger-Poisson systems, J. Differential Equations, 248 (2010), 521-543. doi: 10.1016/j.jde.2009.06.017.
J. Chen, S. T. Chen and X. H. Tang, Ground state solutions for the planar asymptotically periodic Schrödinger-Poisson system, Taiwanese J. Math., 21 (2017), 363-383. doi: 10.11650/tjm/7784.
J. Chen, X. H. Tang and S. T. Chen, Existence of ground states for fractional Kirchhoff equations with general potentials via Nehari-Pohozaev manifold, Electron. J. Differ. Eq., 2018 (2018), Paper No. 142, 21 pp.
S. T. Chen and X. H. Tang, Ground state sign-changing solutions for a class of Schrödinger-Poisson type problems in $\mathbb R^3$, Z. Angew. Math. Phys., 67 (2016), Art. 102, 18 pp. doi: 10.1007/s00033-016-0695-2.
S. T. Chen and X. H. Tang, Nehari type ground state solutions for asymptotically periodic Schrödinger-Poisson systems, Taiwan. J. Math., 21 (2017), 363-383. doi: 10.11650/tjm/7784.
S. T. Chen and X. H. Tang, Improved results for Klein-Gordon-Maxwell systems with general nonlinearity, Discrete Contin. Dyn. Syst., 38 (2018), 2333-2348. doi: 10.3934/dcds.2018096.
S. T. Chen and X. H. Tang, Ground state solutions for generalized quasilinear Schrödinger equations with variable potentials and Berestycki-Lions nonlinearities, J. Math. Phys., 59 (2018), 081508, 18 pp. doi: 10.1063/1.5036570.
S. Cingolani and T. Weth, On the planar Schrödinger-Poisson system, Ann. Inst. H. Poincaré Anal. Non Linéaire, 33 (2016), 169-197. doi: 10.1016/j.anihpc.2014.09.008.
G. Coclite, A multiplicity result for the nonlinear Schrödinger-Maxwell equations, Commun. Appl. Anal., 7 (2003), 417-423.
T. D'Aprile and D. Mugnai, Solitary waves for nonlinear Klein-Gordon-Maxwell and Schrödinger-Maxwell equations, Proc. Roy. Soc. Edinburgh Sect. A, 134 (2004), 893-906. doi: 10.1017/S030821050000353X.
M. Du and T. Weth, Ground states and high energy solutions of the planar Schrödinger-Poisson system, Nonlinearity, 30 (2017), 3492-3515. doi: 10.1088/1361-6544/aa7eac.
E. H. Lieb, Thomas-Fermi and related theories and molecules, Rev. Modern Phys., 53 (1981), 603-641. doi: 10.1103/RevModPhys.53.603.
E. H. Lieb, Sharp constants in the Hardy-Littlewood-Sobolev inequality and related inequalities, Ann. of Math., 118 (1983), 349-374. doi: 10.2307/2007032.
P. Markowich, C. Ringhofer and C. Schmeiser, Semiconductor Equations, Springer-Verlag, Vienna, 1990.
P.-L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case, I & II, Ann. Inst. H. Poincaré Anal. Non Linéaire, 1 (1984), 109-145 and 223-283. doi: 10.1016/S0294-1449(16)30422-X.
D. Ruiz, The Schrödinger-Poisson equation under the effect of a nonlinear local term, J. Funct. Anal., 237 (2006), 655-674. doi: 10.1016/j.jfa.2006.04.005.
D. Ruiz, On the Schrödinger-Poisson-Slater system: Behavior of minimizers, radial and nonradial cases, Arch. Ration. Mech. Anal., 198 (2010), 349-368. doi: 10.1007/s00205-010-0299-5.
E. A. B. Silva and G. F. Vieira, Quasilinear asymptotically periodic Schrödinger equations with critical growth, Calc. Var. Partial Differential Equations, 39 (2010), 1-33. doi: 10.1007/s00526-009-0299-1.
J. Stubbe, Bound states of two-dimensional Schrödinger-Newton equations, arXiv: 0807.4059.
J. J. Sun and S. W. Ma, Ground state solutions for some Schrödinger-Poisson systems with periodic potentials, J. Differential Equations, 260 (2016), 2119-2149. doi: 10.1016/j.jde.2015.09.057.
X. H. Tang, New conditions on nonlinearity for a periodic Schrödinger equation having zero as spectrum, J. Math. Anal. Appl., 413 (2014), 392-410. doi: 10.1016/j.jmaa.2013.11.062.
X. H. Tang and B. T. Cheng, Ground state sign-changing solutions for Kirchhoff type problems in bounded domains, J. Differential Equations, 261 (2016), 2384-2402. doi: 10.1016/j.jde.2016.04.032.
X. H. Tang and S. T. Chen, Ground state solutions of Nehari-Pohozaev type for Schrödinger-Poisson problems with general potential, Discrete Contin. Dyn. Syst., 37 (2017), 4973-5002. doi: 10.3934/dcds.2017214.
X. H. Tang and S. T. Chen, Ground state solutions of Nehari-Pohožaev type for Kirchhoff-type problems with general potentials, Calc. Var. Partial Differential Equations, 56 (2017), Art. 110, 25 pp. doi: 10.1007/s00526-017-1214-9.
X. H. Tang, X. Y. Lin and J. S. Yu, Nontrivial solutions for Schrödinger equation with local super-quadratic conditions, J. Dyn. Differ. Equ., (2018), 1-15. doi: 10.1007/s10884-018-9662-2.
Z. P. Wang and H. S.
Zhou, Positive solution for a nonlinear stationary Schrödinger-Poisson system in $\mathbb R^3$, Discrete Contin. Dyn. Syst., 18 (2007), 809-816. doi: 10.3934/dcds.2007.18.809.
M. Willem, Minimax Theorems, Birkhäuser, Boston, 1996. doi: 10.1007/978-1-4612-4146-1.
L. G. Zhao and F. K. Zhao, On the existence of solutions for the Schrödinger-Poisson equations, J. Math. Anal. Appl., 346 (2008), 155-169. doi: 10.1016/j.jmaa.2008.04.053.
\begin{document}

\title{Challenges of Feature Selection for Big Data Analytics\footnote{This is a preprint version. The final version is to appear in the Special Issue on Big Data, IEEE Intelligent Systems.}}

\author{ Jundong Li and Huan Liu\\ Computer Science and Engineering\\ Arizona State University, USA\\ \texttt{\{jundongl,huan.liu\}@asu.edu} }

\date{}

\maketitle

\begin{abstract} We are surrounded by huge amounts of large-scale, high dimensional data. For many learning tasks, it is desirable to reduce the dimensionality of the data because of the curse of dimensionality. Feature selection has shown its effectiveness in many applications by building simpler and more comprehensible models, improving learning performance, and preparing clean, understandable data. Recently, some unique characteristics of big data, such as data velocity and data variety, have presented challenges to the feature selection problem. In this paper, we envision these challenges of feature selection for big data analytics. In particular, we first give a brief introduction to feature selection and then detail the challenges of feature selection for structured, heterogeneous and streaming data as well as its scalability and stability issues. Finally, to facilitate and promote feature selection research, we present an open-source feature selection repository (scikit-feature), which consists of most of the currently popular feature selection algorithms. \end{abstract}

\noindent \textbf{Keywords.} Feature Selection; Big Data; Repository

\section{A Brief Introduction of Feature Selection} Massive amounts of high dimensional data are pervasive across many domains, ranging from social media, e-commerce, bioinformatics, health care and transportation to online education. As an example, we show the growth trend of instance numbers and feature numbers in the UCI machine learning repository~\cite{bache2013uci} in Figure~\ref{fig:UCI}.
As can be observed, both the data sample size and the number of features are continuously growing over time. When applying data mining and machine learning algorithms to high dimensional data, a critical issue is the curse of dimensionality. It refers to the phenomenon that data become sparser in high dimensional space, adversely affecting algorithms designed for low dimensional space. In addition, the presence of many features significantly increases the computational and memory storage requirements.

\begin{figure} \caption{Samples and features growth trend during the past thirty years in the UCI machine learning repository.} \label{fig:UCI} \end{figure}

Feature selection, as a type of dimension reduction technique, has been proven to be effective and efficient in handling high dimensional data~\cite{liu2007computational,li2016feature}. It directly selects a subset of relevant features for model construction. Since feature selection keeps a subset of the original features, one of its major merits is that it maintains the physical meanings of the original feature set and thus gives better model readability and interpretability. For this reason, it is widely applied in many real world applications such as gene analysis and text mining. Feature selection obtains relevant features by removing irrelevant and redundant ones. The removal of irrelevant and redundant features reduces the computational and storage costs without significant loss of information or degradation of learning performance.
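The removal of irrelevant and redundant features discussed above can be made concrete with a tiny correlation-based filter. This is an illustrative sketch, not an algorithm from the paper; the thresholds `relevance_t` and `redundancy_t` are made-up values for the example.

```python
import numpy as np

def select_features(X, y, relevance_t=0.3, redundancy_t=0.9):
    """Greedy correlation-based filter: keep a feature if it correlates
    with the target (relevance) and is not nearly a copy of an already
    kept feature (redundancy). Thresholds are illustrative only."""
    selected = []
    for j in range(X.shape[1]):
        if abs(np.corrcoef(X[:, j], y)[0, 1]) < relevance_t:
            continue          # irrelevant: barely related to the target
        if any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > redundancy_t
               for k in selected):
            continue          # redundant: almost a copy of a kept feature
        selected.append(j)
    return selected

rng = np.random.default_rng(0)
f1 = rng.normal(size=200)                   # relevant feature
f2 = f1 + rng.normal(scale=0.05, size=200)  # redundant w.r.t. f1
f3 = rng.normal(size=200)                   # irrelevant noise
y = (f1 > 0).astype(float)
X = np.column_stack([f1, f2, f3])
print(select_features(X, y))                # → [0]
```

On this synthetic data the filter keeps only the first feature: the near-copy is dropped as redundant and the noise feature as irrelevant, mirroring the distinction drawn in the text.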
Taking Figure~\ref{fig:featureIllustration} as an example, feature $f_{1}$ is a relevant feature which can separate the two classes (clusters) in Figure~\ref{fig:featureIllustration-a}; in Figure~\ref{fig:featureIllustration-b}, feature $f_{2}$ is considered a redundant feature w.r.t. feature $f_{1}$, since feature $f_{1}$ can already discriminate the two classes (clusters) well; in Figure~\ref{fig:featureIllustration-c}, feature $f_{3}$ is an irrelevant feature, as it does not contain useful information to separate the two classes (clusters).

\begin{figure} \caption{Example of relevant, irrelevant and redundant features.} \label{fig:featureIllustration-a} \label{fig:featureIllustration-b} \label{fig:featureIllustration-c} \label{fig:featureIllustration} \end{figure}

According to the availability of class labels, feature selection algorithms can be categorized into supervised and unsupervised methods. Supervised feature selection is usually taken as a preprocessing step for a classification/regression task. It chooses features that can discriminate data instances from different classes or regression targets. Since the label information is known a priori, the relevance of a feature is normally assessed by its correlation with class labels. On the other hand, unsupervised feature selection is generally applied for the clustering task. Without class labels to guide feature selection, it evaluates feature importance by alternative criteria such as data similarity, local discriminative information and data reconstruction error.

With regard to search strategies, feature selection algorithms can be divided into wrapper methods, filter methods and embedded methods. Wrapper methods typically use the learning performance of a predefined model to evaluate feature relevance. Specifically, they repeatedly choose a subset of features and evaluate the learning performance with the selected features until the highest learning performance is obtained.
Since wrapper methods scan through the whole search space, they are slow and seldom used in practice. Filter methods, on the other hand, do not rely on any learning algorithm and are therefore more efficient. They exploit the characteristics of the data to measure feature relevance. Usually, they score features based on some ranking criterion and then return the top ranked features. Since these methods do not explicitly consider the bias of learning algorithms, the selected features may not be optimal for a particular learning task. Embedded methods provide a trade-off between filter and wrapper methods by embedding feature selection into model learning, and thus inherit the merits of both: first, they include the interactions with the learning algorithm; and second, they are far more efficient than wrapper methods, since they do not need to evaluate feature sets iteratively.

\section{Challenges of Feature Selection} Recently, the popularity of big data has presented some challenges for the traditional feature selection task. Meanwhile, some unique characteristics of big data also bring about new opportunities for feature selection research. In the next few subsections, we present the challenges of feature selection for big data analytics from six aspects. In particular, the challenges of structured features, linked data, multi-source and multi-view data, and streaming data and features are from the data perspective, while the last two challenges, scalability and stability, are from the performance perspective.

\subsection{Structured Features} Most existing feature selection algorithms are designed for generic data and are based on a strong assumption that features do not have explicit correlations. In other words, they completely ignore the intrinsic structures among features.
For example, these feature selection methods may select the same subset of features even when the features are reshuffled~\cite{ye2012sparse}. In many real world applications, features exhibit various kinds of structures, e.g., spatial or temporal smoothness, disjoint groups, overlapping groups, trees and graphs. When applying feature selection algorithms to datasets with structured features, it is beneficial to explicitly incorporate this prior knowledge, which may improve subsequent learning tasks such as classification and clustering. Next, we focus on the three most common feature structures, i.e., group structure, tree structure and graph structure.

The first structure that features may exhibit is the group structure. Examples of group structured features include different frequency bands represented as groups in signal processing~\cite{mcauley2005subband} and genes with similar functionalities acting as groups in bioinformatics~\cite{ma2007supervised}. Therefore, when performing feature selection, it is appealing to explicitly take the group structure among features into consideration. Figure~\ref{fig:group} shows an illustrative example of features with group structures (4 groups).

\begin{figure} \caption{Group structures among features.} \label{fig:group} \end{figure}

In addition to the group structure, features can also form other kinds of structures such as the tree structure. For example, in image processing such as face images, different pixels (features) can be represented as a tree, where the root node indicates the whole face, its child nodes can be the different organs, and each specific pixel is considered a leaf node. In other words, these pixels enjoy a spatial locality structure. Figure~\ref{fig:tree} shows an example of 8 features with a four-layer tree structure. Another motivating example is that genes/proteins may form certain hierarchical tree structures~\cite{jenatton2011structured}.
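Group structure is commonly encoded with a group-lasso-style penalty, which can zero out whole groups of features at once so that selection respects the grouping. The following proximal-gradient sketch is illustrative only; the data, groups and hyperparameters are made up for the example and are not from the paper.

```python
import numpy as np

def group_lasso_select(X, y, groups, lam=0.3, lr=0.05, iters=1000):
    """Proximal-gradient sketch of group-lasso feature selection for
    least squares: after each gradient step, group soft-thresholding
    either shrinks a group or sets it to zero as a whole."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / len(y)    # least-squares gradient
        w -= lr * grad
        for g in groups:                     # group soft-thresholding
            norm = np.linalg.norm(w[g])
            w[g] = 0 if norm <= lr * lam else w[g] * (1 - lr * lam / norm)
    return [g for g in groups if np.linalg.norm(w[g]) > 0]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
y = X[:, 0] + 0.5 * X[:, 1]                  # only the first group matters
groups = [[0, 1, 2], [3, 4, 5]]
print(group_lasso_select(X, y, groups))      # the noise group is dropped
```

Because the penalty acts on the l2 norm of each group rather than on individual coefficients, the uninformative group is eliminated as a unit, which is exactly the behavior group-structured selection is after.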
\begin{figure} \caption{Tree structures among features.} \label{fig:tree} \end{figure}

\begin{figure} \caption{Graph structures among features.} \label{fig:graph} \end{figure}

Features may also form graph structures. For example, in natural language processing, if we take each word as a feature, we have synonym and antonym relationships between different words~\cite{fellbaum1998wordnet}. Moreover, many biological studies show that genes tend to work in groups according to their biological functions, and there are strong dependencies between some genes. Since features show some dependencies, we can model the features by an undirected graph, where nodes represent features and edges among nodes show the pairwise dependencies between features. An illustrative example of 7 features with a graph structure is shown in Figure~\ref{fig:graph}.

\subsection{Linked Data} Linked data are ubiquitous on many platforms such as Twitter\footnote{https://twitter.com/} (tweets linked through hyperlinks), social networks in Facebook\footnote{https://www.facebook.com/} (users connected by friendships) and biological networks (protein interactions). Since linked data are correlated with each other by different types of links, they are distinct from traditional attribute-value data. Figure~\ref{fig:linkedfeature} presents an illustrative example of linked data and its two representations. Figure~\ref{fig:linkedfeature-a} shows 8 linked instances ($u_{1}$ to $u_{8}$), while Figure~\ref{fig:linkedfeature-b} is a conventional representation of attribute-value data such that each row corresponds to one instance and each column corresponds to one feature. As mentioned above, linked data provide an extra source of information in the form of links, which can be represented by an adjacency matrix, illustrated in Figure~\ref{fig:linkedfeature-c}.
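The two representations above (a content matrix and an adjacency matrix) suggest one simple way links can inform feature evaluation: prefer features that vary little across linked instances. The Laplacian-based score below is an illustrative choice on toy data, not a specific algorithm from the literature.

```python
import numpy as np

# Toy linked data: 4 instances with 3 features, plus a link graph.
X = np.array([[1.0, 0.2, 5.0],
              [0.9, 0.8, 1.0],
              [0.1, 0.7, 4.8],
              [0.2, 0.3, 1.2]])
A = np.array([[0, 1, 0, 0],      # adjacency matrix: u1-u2 and u3-u4 linked
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])

def link_smoothness(X, A):
    """Score each feature by how much it varies across linked pairs,
    via the Laplacian quadratic form x^T L x = sum over edges of
    (x_i - x_j)^2; lower means smoother over the link graph."""
    D = np.diag(A.sum(axis=1))
    L = D - A                    # unnormalized graph Laplacian
    return np.array([X[:, j] @ L @ X[:, j] for j in range(X.shape[1])])

scores = link_smoothness(X, A)
print(scores)                    # feature 0 varies least across links
```

Here the first feature agrees closely within each linked pair and gets the lowest score, while the third feature disagrees strongly and gets the highest; a link-aware selector could rank unlabeled features this way, addressing challenge (3) above without labels.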
The challenges of feature selection for linked data~\cite{li2016toward,li2016robust,tang2012unsupervised} lie in the following three aspects: (1) how to exploit relations among data instances; (2) how to take advantage of these relations for feature selection; and (3) since linked data are often unlabeled, how to evaluate the relevance of features without the guidance of label information. \begin{figure} \caption{An illustrative example of linked data.} \label{fig:linkedfeature-a} \label{fig:linkedfeature-b} \label{fig:linkedfeature-c} \label{fig:linkedfeature} \end{figure} \subsection{Multi-Source Data and Multi-View Data} In many data mining and machine learning tasks, we have multiple data sources for the same set of data instances. For example, recent advances in bioinformatics reveal the existence of non-coding RNA species in addition to the widely used messenger RNA; these non-coding RNA species function across a variety of biological processes. The availability of multiple data sources makes it possible to address some problems otherwise unsolvable using a single source, since the multi-faceted representations of data can help depict intrinsic patterns hidden in any single source. For multi-source feature selection, we usually have a target source, and the other sources complement the selection of features on the target source~\cite{zhao2011spectral}. Multi-view sources represent different facets of data instances via different feature spaces. These feature spaces are naturally dependent and also high dimensional, which suggests that feature selection is necessary to prepare these sources for effective data mining tasks such as multi-view clustering. A task of multi-view feature selection thus arises, which aims to select features from different feature spaces simultaneously by using their relations~\cite{tang2013unsupervised,wang2013multi}.
For example, we would like to select pixel features, tag features, and text features of images in Flickr\footnote{https://www.flickr.com/} simultaneously. Since multi-view feature selection is designed to select features across multiple views by using their relations, it is naturally different from multi-source feature selection. We illustrate the difference between multi-source feature selection and multi-view feature selection in Figure~\ref{fig:multi-source-multi-view}. \begin{figure} \caption{Differences between multi-source and multi-view feature selection.} \label{fig:Multi-source} \label{fig:Multi-view} \label{fig:multi-source-multi-view} \end{figure} \subsection{Streaming Data and Features} In many scenarios, we are faced with a significant amount of data that needs to be processed in real time to gain insights. In the worst case, the number of data instances or features is unknown or even unbounded, so it is not practical to wait until all data instances or features are available before performing feature selection. For streaming data, a motivating example is the online spam email detection problem: new emails arrive constantly, and it is difficult to employ batch-mode feature selection methods to select relevant features in a timely manner. Therefore, some feature selection algorithms have been proposed to maintain and update a feature subset as new data streams keep arriving. The process of feature selection on data streams is illustrated in Figure~\ref{fig:datastreams}. In some settings where the streaming data cannot be loaded into memory, we can make only one pass over the data, since a second pass is either unavailable or computationally expensive. How to select relevant features in a timely manner with one pass over the data~\cite{huang2015unsupervised} remains a challenging problem.
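As a hedged illustration of the one-pass constraint, per-feature relevance can be maintained with streaming sufficient statistics so that each instance is touched exactly once. The sketch below keeps a running Pearson correlation between each feature and the label; the stream contents and the selection threshold are hypothetical.

```python
class OnePassCorrelation:
    """Running sums for the Pearson correlation between one feature and
    the label; each (x, y) pair from the stream is seen exactly once."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxy = self.sxx = self.syy = 0.0

    def update(self, x, y):
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxy += x * y
        self.sxx += x * x
        self.syy += y * y

    def score(self):
        # |Pearson correlation|, computed from the accumulated sums only.
        cov = self.n * self.sxy - self.sx * self.sy
        vx = self.n * self.sxx - self.sx ** 2
        vy = self.n * self.syy - self.sy ** 2
        return abs(cov) / ((vx * vy) ** 0.5) if vx > 0 and vy > 0 else 0.0

# Hypothetical stream: feature 0 tracks the label, feature 1 does not.
stream = [((1.0, 0.5), 1.0), ((0.0, 0.5), 0.0),
          ((1.0, 0.6), 1.0), ((0.0, 0.6), 0.0)]
scorers = [OnePassCorrelation(), OnePassCorrelation()]
for features, label in stream:
    for j, scorer in enumerate(scorers):
        scorer.update(features[j], label)
selected = [j for j, s in enumerate(scorers) if s.score() > 0.5]
```

No instance is stored, so the memory footprint is constant in the number of instances, which is exactly what the one-pass setting requires.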
\begin{figure} \caption{A framework of feature selection on data streams.} \label{fig:datastreams} \end{figure} In an orthogonal setting, feature selection for streaming features also has practical significance. For example, Twitter produces more than 320 million tweets every day, and a large number of slang words (features) are continuously being generated. These slang words promptly grab users’ attention and become popular in a short time. Therefore, it is preferable to perform streaming feature selection to rapidly adapt to such changes~\cite{li2015unsupervised}. A general framework of streaming feature selection is presented in Figure~\ref{fig:StreamingFS}. At each time step, a typical streaming feature selection algorithm first determines whether to accept the most recently arrived feature; if the feature is added to the selected feature set, it then determines whether to discard some existing features from the selected feature set. The process repeats until no new features arrive. \begin{figure} \caption{A framework of streaming feature selection.} \label{fig:StreamingFS} \end{figure} \subsection{Scalability} With the tremendous growth of dataset sizes, the scalability of most current feature selection algorithms may be jeopardized. In many scientific and business applications, data are usually measured in terabytes (1TB = $10^{12}$ bytes). Normally, datasets at the terabyte scale cannot be loaded into memory directly, which limits the usability of most feature selection algorithms. Currently, there are some attempts to use distributed programming frameworks such as MapReduce and MPI to perform parallel feature selection for very large-scale datasets~\cite{singh2009parallel}. Recently, big data of ultrahigh dimensionality has emerged in many real-world applications such as text mining and information retrieval.
Most feature selection algorithms do not scale well to ultrahigh-dimensional data; their efficiency deteriorates quickly, or they even become computationally infeasible. In this case, well-designed feature selection algorithms with linear or sublinear running time are preferred. \subsection{Stability} The stability of feature selection algorithms is also an important consideration when developing new methods~\cite{he2010stable}. A motivating example from bioinformatics shows that domain experts would like to see the same or a similar set of genes (features) selected each time they obtain new samples under a small amount of perturbation. Otherwise, domain experts would not trust these algorithms if they obtained quite different sets of features under small data perturbations. It has also been found that the underlying characteristics of data may greatly affect the stability of feature selection algorithms, so the stability issue may be data dependent. These characteristics include the feature dimensionality, the number of data instances, etc. In contrast to supervised feature selection, the stability of unsupervised feature selection algorithms has not been well studied yet. Studying the stability of unsupervised feature selection is much more difficult than that of supervised methods. The reason is that in unsupervised feature selection, we do not have enough prior knowledge about the cluster structure of the data. Thus we are uncertain whether a new data instance, i.e., the perturbation, belongs to an existing cluster or introduces a new cluster. \section{Feature Selection Repository} To tackle the challenges of feature selection for big data analytics and to promote feature selection research in this community, we present an open-source feature selection repository - \emph{scikit-feature} (http://featureselection.asu.edu/).
The purpose of this feature selection repository is to collect widely used feature selection algorithms developed in feature selection research, serving as a platform to facilitate their application, comparison and joint study. The repository also helps researchers achieve more reliable evaluation when developing new feature selection algorithms. Currently, \emph{scikit-feature} consists of popular feature selection algorithms in the following categories: \begin{itemize} \setlength\itemsep{0.3em} \item Similarity based feature selection \item Information theoretical based feature selection \item Statistical based feature selection \item Sparse learning based feature selection \item Wrapper based feature selection \item Structural feature selection \item Streaming feature selection \end{itemize} Among these categories, similarity based, information theoretical based, and statistical based methods correspond to the filter methods discussed above. Wrapper based methods and sparse learning based methods correspond to the wrapper methods and embedded methods, respectively. We also include structured features, linked data, multi-view and multi-source data in the category of structural feature selection, and streaming data and features in the streaming feature selection category. In addition, scikit-feature provides many benchmark feature selection datasets, as well as examples of how to evaluate feature selection algorithms via classification or clustering tasks. The experimental results can be obtained from our repository project website (http://featureselection.asu.edu/datasets.php). For each dataset, we list all applicable feature selection algorithms along with their evaluation on either a classification or a clustering task.
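For intuition, the similarity based and statistical based filter methods above score each feature independently of any learner. The snippet below computes a Fisher-score-style ratio of between-class to within-class variance in plain Python; the data are hypothetical, and the actual scikit-feature API is deliberately not reproduced here.

```python
from statistics import mean, pvariance

def fisher_score(column, labels):
    """Ratio of between-class to within-class variance for a single
    feature; larger values indicate better class separation."""
    classes = sorted(set(labels))
    overall = mean(column)
    between = within = 0.0
    for c in classes:
        vals = [x for x, y in zip(column, labels) if y == c]
        between += len(vals) * (mean(vals) - overall) ** 2
        within += len(vals) * pvariance(vals)
    return between / within if within > 0 else float("inf")

# Hypothetical two-class data: feature A separates the classes, B does not.
labels = [0, 0, 0, 1, 1, 1]
feat_a = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
feat_b = [0.5, 0.9, 0.1, 0.6, 0.2, 0.8]
scores = {"A": fisher_score(feat_a, labels),
          "B": fisher_score(feat_b, labels)}
# Feature A receives a much larger score than feature B.
```

Ranking features by such a score and keeping the top-$k$ is the basic recipe shared by the filter methods in the repository.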
We also provide an interactive tool FeatureMiner~\cite{cheng2016featureminer} to ease the usage of these feature selection algorithms based on the repository. \end{document}
\begin{document} \title{Approximation of Mean Field Games to $N$-Player Stochastic Games, with Singular Controls} \author{Haoyang Cao, Xin Guo, and Joon Seok Lee \thanks{Haoyang Cao and Xin Guo are with the Department of Industrial Engineering and Operations Research, University of California at Berkeley, Berkeley, CA, 94720 USA e-mail: [email protected], [email protected].} \thanks{Joon Seok Lee was with Laboratoire de Probabilités et Modèles Aléatoires, CNRS, UMR 7599, Universit\'e Paris Diderot e-mail: [email protected].} } \maketitle \begin{abstract} This paper establishes that $N$-player stochastic games with singular controls, either of bounded velocity or of finite variation, can both be approximated by mean field games (MFGs) with singular controls of bounded velocity. More specifically, it shows that (i) the optimal control to an MFG with singular controls of bounded velocity $\theta$ is an $\epsilon_N$-NE to an $N$-player game with singular controls of the same bounded velocity, with $\epsilon_N = O(\frac{1}{\sqrt{N}})$, and (ii) the optimal control to this MFG is an $(\epsilon_N + \epsilon_{\theta})$-NE to an $N$-player game with singular controls of finite variation, where $\epsilon_{\theta}$ is an error term that depends on $\theta$. This work generalizes the classical result on approximating $N$-player games by MFGs, by allowing for discontinuous controls. \end{abstract} \IEEEpeerreviewmaketitle \section{Introduction} \IEEEPARstart {$N$}{-player} non-zero-sum stochastic games are notoriously hard to analyze. Recently, the theory of Mean Field Games (MFGs), pioneered by \cite{LL2007} and \cite{HMC2006}, has presented a powerful approach to studying stochastic games of a large population with small interactions. (See the lecture notes and books \cite{BFY2013}, \cite{CDLL2015}, \cite{CarmonaDelarue}, \cite{GLL2010}, and the references therein for more details on MFGs).
The key idea behind MFGs is to avoid directly analyzing the difficult $N$-player stochastic games, and instead to approximate the dynamics and the objective function via the notion of the population's probability distribution flow, a.k.a. the mean information process. This idea is feasible if an MFG can approximate the corresponding $N$-player game under proper criteria. The seminal work of \cite{HMC2006} demonstrated that this is indeed the case, and showed that the value function of an $N$-player game under NE can be approximated by the value function of an associated MFG with an error of order $\frac{1}{\sqrt{N}}$. There has been more recent progress on higher-order error analysis, including the central limit theorem and the large deviation principle for MFGs. For instance, \cite{DLR2018b} and \cite{DLR2019} studied diffusion-based models with common noise via the coupling approach, and \cite{BC2018} and \cite{CP2018} analyzed finite state space models without common noise using master equations. As such, MFGs provide an elegant and analytically feasible framework to approximate $N$-player stochastic games. All existing works on the approximation of $N$-player stochastic games by MFGs are established within the framework of regular controls, where controls are absolutely continuous. However, in most control problems from engineering and economics the controls are not absolutely continuous, or even continuous. A natural question is whether this relation between the MFG and the $N$-player game still holds when controls may not be continuous. The focus of this paper is to establish, within the singular control framework, the approximation of $N$-player stochastic games by their corresponding MFGs. \paragraph{MFGs and stochastic games with singular controls} Compared with regular controls, singular controls provide a more general and natural mathematical framework in which both the controls and the controlled state processes may be discontinuous.
However, it is well documented that the analysis of singular controls is much harder than that of regular controls. From a PDE perspective, the associated fully nonlinear PDE is coupled with possibly state- and time-dependent gradient constraints. From a control perspective, the Hamiltonian for singular controls of finite variation diverges \cite{Pham2009} and the standard stochastic maximum principle fails; even in the case of bounded velocity, the Hamiltonian is discontinuous. In contrast, the existence of solutions to MFGs relies on the assumption that the Hamiltonian $H(x,p)$ has sufficient regularity, especially with respect to $p$. For instance, \cite{LL2007} assumed that $H$ is of class $\mathcal{C}^1$ in $p$, and \cite{Cardaliaguet2013} assumed that $H$ is of class $\mathcal{C}^2$ and that its second-order derivative with respect to $p$ is Lipschitz continuous. The exception is \cite{Lacker2015}, which established in a general framework the existence of Markovian equilibrium solutions when controls are continuous but may not be Lipschitz. \cite{FH2016} adopted the notion of relaxed controls to obtain the existence of solutions to MFGs with singular controls and established their approximation by MFGs with regular controls. Nevertheless, the question remains as to whether $N$-player games can be approximated by MFGs when controls may not be absolutely continuous. \paragraph{Our work} There are two types of singular controls, namely, singular controls of finite variation and singular controls of bounded velocity. This paper establishes that $N$-player stochastic games with singular controls, {\it both} of finite variation and of bounded velocity, can be approximated under the NE criterion by MFGs with singular controls of bounded velocity. This result suggests that one may completely circumvent the more difficult MFGs with singular controls of finite variation when analyzing stochastic games of singular type, and instead focus on singular control games of bounded velocity.
Indeed, singular controls of bounded velocity share some nice properties with regular controls and are easier to analyze than singular controls of finite variation. This conviction underlies the main idea in our analysis of the relation between MFGs and the associated $N$-player stochastic games. The analysis starts with two basic components. The first is the relationship between the underlying singular control problems, bounded velocity vs. finite variation: Theorem~\ref{thetainfty} shows that under proper assumptions, the value function of the former converges to that of the latter. The second is the existence, uniqueness, and regularity of the solution to the MFG with singular controls of bounded velocity, established in Theorem~\ref{mainthm}. These two ingredients lead to the main theorem on the approximation of MFGs to the corresponding $N$-player games. Specifically, (i) given a bounded velocity $\theta$, the optimal control to the MFG with singular controls of bounded velocity is an $\epsilon_N$-NE to an $N$-player game with singular controls of bounded velocity, with $\epsilon_N = O(\frac{1}{\sqrt{N}})$, and (ii) the optimal control to the MFG is an $(\epsilon_N + \epsilon_{\theta})$-NE to an $N$-player game with singular controls of finite variation, where $\epsilon_{\theta}$ is an error term that depends on $\theta$. \paragraph{Other related work} There are earlier works relating singular controls of bounded velocity and of finite variation. For instance, exploiting this relation enabled \cite{MT1989} to establish the existence of the optimal singular control of finite variation for a controlled Brownian motion. This relation is also analyzed in \cite{HPY2016} for a monotone follower type of singular control. None of these works is in a game setting. Moreover, to establish the relation between MFGs and $N$-player games in a singular control framework, one needs a more explicit construction of the optimal control policies.
Recently, \cite{BBC2018} proposed a Markov chain based approximation approach for numerically solving MFGs with reflecting barriers and showed its convergence. More recently, \cite{DF2018} showed that, under the notion of weak (distributional) NE, $N$-player stochastic games with singular controls of finite variation can be approximated by those of bounded velocity, if the set of NEs for the latter is relatively compact under an appropriate topology. The focus and approach of these works are different from ours. Finally, the existence of a Markovian NE solution for MFGs in Theorem \ref{mainthm} was established in \cite{Lacker2015} for a more general class of MFGs. His approach is sophisticated and consists of two main steps. The first step is the existence of a weak solution under a convexity assumption, and the second step is a measurable selection argument showing that this weak solution is in fact the desirable one. Our approach is to directly construct the Markovian NE via a fixed point argument, based on the special structure of the game. This yields a more explicit solution structure with additional regularity properties, which are necessary for the subsequent analysis connecting MFGs and the associated $N$-player games. \section{Problem formulations and main results} \label{setup} We start with a probability space $(\Omega, \mathcal{F}, \mathbb{F} =(\mathcal{F}_t)_{0\leq t \leq \infty}, P)$ on which $W^i = \{W_t^i\}_{0\leq t\leq \infty}$, $i=1,\ldots,N<\infty$, are i.i.d. standard Brownian motions. Let $\mathcal{P} (\mathbb{R}) $ be the set of all probability measures on $\mathbb{R}$, and $\mathcal{P}_p (\mathbb{R}) $ be the set of all probability measures on $\mathbb{R}$ with finite $p$th moment.
That is, $$\mathcal{P}_p (\mathbb{R}) = \biggl\lbrace \mu \in \mathcal{P} (\mathbb{R}) \biggl| \biggl(\int_\mathbb{R} |x|^p \mu( dx)\biggl )^\frac{1}{p} < \infty \biggr\rbrace.$$ To define the flow of probability measures $\{\mu_t\}_{t\ge 0}$, let us recall the $p$th order Wasserstein metric on $ \mathcal{P}_p(\mathbb{R})$ defined as $$D^p(\mu,\mu')= \inf_{\tilde{\mu} \in \Gamma(\mu, \mu') } \limits \biggl(\int_{\mathbb{R}\times\mathbb{R}} |y-y'|^p \tilde{\mu} (dy,dy') \biggl)^{\frac{1}{p}},$$ where $\Gamma(\mu, \mu')$ is the set of all couplings of $\mu $ and $\mu' $. Denote by $C([0, T], \mathcal{P}_2 (\mathbb{R}))$ the set of all continuous mappings from $[0, T]$ to $\mathcal{P}_2 (\mathbb{R})$. Then $\mathcal{M}_{[0,T]} \subset C([0, T],\mathcal{P}_2 (\mathbb{R}))$ is the class of flows of probability measures such that there exists a positive constant $c$ with \begin{align*} \mathcal{M}_{[0,T]} = &\biggl\{ \{\mu_t\}_{0\le t\le T} \biggl| \sup_{s\neq t}\frac{ D^1(\mu_t,\mu_s) }{|t-s|^{\frac{1}{2}}} \leq c, \sup_{t \in [0,T]} \int_\mathbb{R} |x|^2 \mu_t(dx) \leq c\biggl\}. \end{align*} $\mathcal{M}_{[0,T]}$ is a metric space endowed with the metric \begin{align}\label{metric1} \hspace{-10pt}d_\mathcal{M}\biggl(\{\mu_t\}_{0\le t\le T},\{\mu_t'\}_{0\le t\le T}\biggr) = \sup_{0\le t\le T} D^2(\mu_t,\mu_t'). \end{align} Throughout, we will use $Lip(\psi)$ to denote a Lipschitz constant of a given Lipschitz function $\psi$; that is, ${|\psi(x)-\psi(y) | \leq Lip(\psi) |x-y|}$ for any $x,y \in \mathbb{R}$. We will write $$\mathcal{L} \psi (x) = b(x) \partial_x \psi (x)+\frac{1}{2} \sigma^2 (x) \partial_{xx} \psi (x)$$ for the infinitesimal generator of any stochastic process $$dx_t = b(x_t) dt + \sigma (x_t) dW_t,$$ applied to any $\psi (x) \in \mathcal{C}^2.$ Finally, we say that a function $f$ is of polynomial growth if $|f (x)| \leq c(|x|^k+1) $ for some constants $c$ and $k$, for all $x$.
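For two empirical measures on $\mathbb{R}$ with the same number of atoms, the monotone (sorted) coupling attains the infimum in the definition of $D^p$, so the metric reduces to matching order statistics. A small illustrative computation, with hypothetical sample values:

```python
def wasserstein_p(xs, ys, p=2):
    """p-th order Wasserstein distance between two empirical measures on
    the real line with equally many atoms: in one dimension the optimal
    coupling pairs the sorted samples."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return (sum(abs(x - y) ** p for x, y in zip(xs, ys)) / len(xs)) ** (1 / p)

mu = [0.0, 1.0, 2.0]
nu = [0.5, 1.5, 2.5]   # mu translated by 0.5
# A pure translation moves every atom by the same amount,
# so the distance equals the shift for every p.
```

This one-dimensional shortcut is specific to the real line; in higher dimensions computing the Wasserstein distance requires solving an optimal transport problem.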
\subsection{Problems of $N$-player stochastic games} \paragraph{$N$-player game with singular controls of finite variation.} Fix a time $T <\infty$ and suppose that there are $N$ identical players in the game. Denote $ \{x_t^i\}_{s \leq t \leq T}$ as the state process in $\mathbb{R}$ for player $i$ ($i = 1, \ldots, N$), with $x_{s-}^i=x^i$ starting from time $s\in [0,T]$. Now assume that the dynamics of $\{x_t^i\}$ follows, for $s\le t \le T$, \begin{equation} \label{nSDE} \hspace{-10pt} dx_t^i = \frac{1}{N}\sum_{j=1}^N b_0(x_t^i,x_t^j) dt + \sigma dW_t^i +d\xi_t^{i+}-d\xi_t^{i-}, x_{s-}^i =x^i, \end{equation} where $b_0: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is bounded, Lipschitz continuous, and $\sigma$ is a positive constant. Here $ \xi_\cdot^i = (\xi_\cdot^{i+},\xi_\cdot^{i-}) $ is the control by player $i$ with $ (\xi_\cdot^{i+},\xi_\cdot^{i-}) $ nondecreasing, c\`adl\`ag, $\xi_{s-}^{i+}=\xi_{s-}^{i-} = 0$, $\mathbb{E} \biggl [\int_s^T d\xi_t^{i+} \biggr]< \infty, $ and $\mathbb{E} \biggl[\int_s^T d\xi_t^{i-} \biggr] < \infty $. Given Eqn. (\ref{nSDE}), the objective of player $i$ is to minimize, over an appropriate control set $\mathcal{U}^N$, her cost function $J^{i,N}(s,x^i , \xi_\cdot^{i+},\xi_\cdot^{i-};\xi_\cdot^{ -i} )$. That is, \begin{equation}\label{Nsingular}\tag{N-FV} \begin{aligned} &\inf_{ (\xi_\cdot^{i +}, \xi_\cdot^{i -}) \in \mathcal{U}^N} J^{i,N}(s,x^i,\xi_\cdot^{i+},\xi_\cdot^{i-};\xi_\cdot^{ -i}) =\\ &\hspace{-30pt}\inf_{ (\xi_\cdot^{i +}, \xi_\cdot^{i-}) \in \mathcal{U}^N} \mathbb{E} \biggl[ \int_s^T \frac{\sum_{j=1}^N f_0(x_t^i, x_t^j)}{N} dt+ \gamma_1 d\xi_t^{i+}+ \gamma_2 d\xi_t^{i-} \biggl]. 
\end{aligned} \end{equation} Here $\xi_\cdot^{ -i}=\{ (\xi_\cdot^{j +}, \xi_\cdot^{j -})\}_{j=1, j \neq i }^N$ denotes the set of controls for all the players except for player $i$, the cost function $f_0: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is Lipschitz continuous, $\gamma_1$ and $\gamma_2 $ are constants, and \begin{align*} &\mathcal{U}^N = \biggl\{ (\xi_\cdot^+,\xi_\cdot^-) ~\biggl|~ \xi_t^\pm \text{ are } \mathcal{F}_t^{(x^1,\dots, x^N)} \text{-adapted, c\`adl\`ag, }\\ &\text{nondecreasing, }\xi_{s-}^\pm =0,\,\mathbb{E} \biggl[\int_s^T d\xi_t^\pm \biggl]<\infty,\forall t\in[s,T] \biggl\}, \end{align*} with $ \{ \mathcal{F}_t^{(x^1,\dots, x^N)}\}_{s\le t\le T} $ the natural filtration of $\{x_t^1,\ldots, x_t^N\}_{s\le t\le T}.$ \paragraph{$N$-player game with singular controls of bounded velocity.} If one restricts the controls $(\xi_\cdot^{i+},\xi_\cdot^{i-})$ to have bounded velocity, so that for a given constant $\theta >0 $, $d\xi_t^{i+} = \dot{\xi}_t^{i+}dt, \, d\xi_t^{i-} = \dot{\xi}_t^{i-} dt$, with $0 \leq \dot{\xi}_t^{i+}, \dot{\xi}_t^{i-} \leq \theta$, then game (\ref{Nsingular}) becomes \begin{equation}\label{Nbound}\tag{N-BD} \begin{aligned} &\inf_{(\xi_\cdot^{i +}, \xi_\cdot^{i -}) \in \mathcal{U}_{\theta}^N} J^{i,N}_{\theta} (s,x^i,\xi_\cdot^{i+},\xi_\cdot^{i-};\xi_\cdot^{ -i} ) = \\ &\hspace{-30pt}\inf_{ (\xi_\cdot^{i +}, \xi_\cdot^{i -}) \in \mathcal{U}_\theta^N} \mathbb{E} \biggl[ \int_s^T \frac{\sum_{j=1}^N f_0(x_t^i, x_t^j)}{N}dt+ \gamma_1 \dot{\xi}_t^{i+}dt +\gamma_2\dot{\xi}_t^{i-}dt \biggl], \end{aligned} \end{equation} subject to \begin{equation*} \begin{aligned} & \hspace{-30pt}dx_t^i = \frac{\sum_{j=1}^N b_0(x_t^i,x_t^j)}{N} dt + \sigma dW_t^i +\dot{\xi}_t^{i+}dt-\dot{\xi}_t^{i-}dt, \,x_s^i=x^i.
\end{aligned} \end{equation*} Here the admissible set is given by $$\begin{aligned} \hspace{-10pt} \mathcal{U}_{\theta}^N = &\biggl\{ (\xi_\cdot^+,\xi_\cdot^-) \biggl| (\xi_\cdot^+,\xi_\cdot^-) \in \mathcal{U}^N ,\,\dot{\xi}_t^\pm\in[0,\theta],\,\forall t\in[s,T] \biggl\}. \end{aligned}$$ There are several criteria to analyze stochastic games. Two standard ones are Pareto optimality and the Nash equilibrium (NE). In this paper we will focus on the NE. Depending on the problem setting and in particular the admissible controls, there are several forms of NEs, including the open loop NE, the closed loop NE, and the closed loop NE in feedback form (a.k.a. the Markovian NE). Throughout the paper, we will consider the Markovian NE, meaning that the controls are deterministic functions of the time $t$, the current state $x_t$, and a fixed measure $\mu_t$. More precisely, \begin{definition}[Markovian $\epsilon$-Nash equilibrium to (\ref{Nsingular})] A Markovian control $(\xi_\cdot^{i*+}, \xi_\cdot^{i*-}) \in \mathcal{U}^N$ for $i = 1,\ldots, N$ is a \emph{Markovian $\epsilon$-Nash equilibrium} to \emph{(\ref{Nsingular})} if for any $i \in \{1, \ldots, N\}$, any $(s,x) \in [0,T]\times \mathbb{R}$ and any Markovian $(\xi_\cdot^{i'+},\xi_\cdot^{i'-}) \in \mathcal{U}^N$, $$\begin{aligned} &E_{x_{s-}^{N}}\biggl[J^{i,N} (s,x_{s-}^{N},\xi_\cdot^{i'+},\xi_\cdot^{i'-};\xi_\cdot^{*-i} )\biggl] \\ &\hspace{10pt}\geq E_{x_{s-}^{N}}\biggl[J^{i,N} (s,x_{s-}^N,\xi_\cdot^{i*+},\xi_\cdot^{i*-};\xi_\cdot^{*-i} )\biggl] -\epsilon.
\end{aligned}$$ \end{definition} \begin{definition}[Markovian $\epsilon$-Nash equilibrium to (\ref{Nbound})] A Markovian control $(\xi_\cdot^{i*+}, \xi_\cdot^{i*-}) \in \mathcal{U}_{\theta}^N$ for $i = 1,\ldots, N$ is a \emph{Markovian $\epsilon$-Nash equilibrium} to \emph{(\ref{Nbound})} if for any $i \in \{1, \ldots, N\}$, any $(s,x) \in [0,T]\times \mathbb{R}$ and any Markovian $(\xi_\cdot^{i'+},\xi_\cdot^{i'-}) \in \mathcal{U}_{\theta}^N$, $$ \begin{aligned} &E_{x_{s-,\theta}^N}\biggl[J^{i,N}_{\theta} (s,x_{s-,\theta}^N,\xi_\cdot^{i'+},\xi_\cdot^{i'-};\xi_\cdot^{*-i} )\biggl] \\ &\hspace{10pt}\geq E_{x_{s-,\theta}^N}\biggl[J^{i,N}_{\theta} (s,x_{s-,\theta}^N,\xi_\cdot^{i*+},\xi_\cdot^{i*-};\xi_\cdot^{*-i} ) \biggl]-\epsilon. \end{aligned}$$ \end{definition} We will show that both game (\ref{Nbound}) and game (\ref{Nsingular}) can be approximated by MFGs with singular controls of bounded velocity, as introduced below. \paragraph{MFGs with singular controls of bounded velocity.} Assume that all $N$ players are identical. That is, for each time $t \in [0,T]$, all $x_t^i$ have the same probability distribution. Define $ \mu_t = \lim_{N \rightarrow \infty} \limits \frac{1}{N} \sum_{i =1}^N \limits \delta_{x_t^i}$ as a limit of the empirical distributions of $x_t^i$. Then, according to Strong Law of Large Numbers (SLLN), as $N\rightarrow \infty$, \begin{align*} &\frac{1}{N}\sum_{j=1}^N b_0(x_t ,x_t^j) \rightarrow \int_\mathbb{R} b_0(x_t ,y) \mu_t(dy)=b(x_t ,\mu_t) , \\ &\frac{1}{N}\sum_{j=1}^N f_0(x_t , x_t^j) \rightarrow \int_\mathbb{R} f_0(x_t , y) \mu_t(dy) =f(x_t , \mu_t), \end{align*} subject to appropriate technical conditions. Here $b, f:\mathbb{R} \times \mathcal{P}_1(\mathbb{R}) \rightarrow \mathbb{R}$ are functions satisfying assumptions to be specified later. 
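The passage to the mean-field limit above can be checked numerically: with a bounded Lipschitz interaction kernel (here a hypothetical $b_0(x,y)=\tanh(y-x)$, which satisfies (A1)) and i.i.d. samples $x^j$, the empirical average approaches the integral against the sampling distribution as $N$ grows.

```python
import math
import random

def b0(x, y):
    # Hypothetical bounded, Lipschitz interaction kernel.
    return math.tanh(y - x)

def empirical_drift(x, samples):
    # (1/N) * sum_j b0(x, x^j), the N-player interaction term.
    return sum(b0(x, xj) for xj in samples) / len(samples)

random.seed(0)
x = 0.3
small = [random.gauss(0.0, 1.0) for _ in range(50)]
large = [random.gauss(0.0, 1.0) for _ in range(200_000)]

# The large-N average is a Monte Carlo estimate of
# b(x, mu) = E[b0(x, Y)] with Y ~ N(0, 1); the small-N average
# fluctuates around it with an error of order 1/sqrt(N).
drift_small = empirical_drift(x, small)
drift_large = empirical_drift(x, large)
```

This is only a numerical sanity check of the SLLN statement, not a substitute for the propagation-of-chaos estimates used in the proofs.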
That is, instead of game (\ref{Nbound}), one can solve for a control $\{\xi^*_t\}_{t\in[0,T]}=\{(\xi_t^{*+},\xi_t^{*-})\}_{t\in[0,T]}$ together with a mean information flow $\{\mu^*_t\}_{t\in[0,T]}$ such that \begin{enumerate} \item Under $\{\mu^*_t\}_{t\in[0,T]}$, $\{(\xi_t^{*+},\xi_t^{*-})\}_{t\in[0,T]}$ is an optimal strategy for \begin{equation}\label{MFGbounded1}\tag{MFG-BD} \begin{aligned} &\hspace{-20pt}v_{\theta} (s, x|\{\mu^*_t\} ) := \inf_{(\xi_\cdot^{ +}, \xi_\cdot^{ -}) \in \mathcal{U}_{\theta}} J_{\theta} (s, x , \xi_\cdot^{ +}, \xi_\cdot^{ -} |\{\mu^*_t\}):= \\ &\hspace{-50pt}\inf_{ (\xi_\cdot^{+}, \xi_\cdot^{ -}) \in \mathcal{U}_{\theta}} \mathbb{E}_{\mu^*_s}\biggl[ \int_s^T \biggl( f(x_t, \mu_t)+ \gamma_1\dot{\xi}_t^{+}+\gamma_2\dot{\xi}_t^{-} \biggl) dt|x_s=x\biggl], \end{aligned} \end{equation} subject to \begin{equation}\label{dynamics-bdd} \begin{aligned} \hspace{-50pt} dx_t &= \biggl( b(x_t,\mu_t^*) + \dot{\xi}_t^{+}- \dot{\xi}_t^{-} \biggl) dt + \sigma dW_t , \, x^*_s & \sim \mu^*_s, \end{aligned} \end{equation} where \begin{align*} &\hspace{-10pt}\mathcal{U}_{\theta} = \biggl\{ (\xi_\cdot^+,\xi_\cdot^-) \biggl| \xi_t^\pm\text{ are } \mathcal{F}_t^{(x_{t-})} \text{-adapted, c\`adl\`ag,}\\ &\hspace{-15pt}\text{nondecreasing, } \xi_{s}^\pm=0, \dot{\xi}_t^\pm\in[0,\theta],\,\mathbb{E} \biggl[\int_s^T d\xi_t^\pm \biggl] <\infty,\, \\&\hspace{-10pt}\forall t \in[s, T] \biggl\}, \end{align*} \noindent with $ \{ \mathcal{F}_t^{(x_{t-})} \}_{s\le t\le T} $ the filtration of $ \{(x_{t-})\}_{s\le t\le T} $. When $\theta\to \infty$, we simply write $\mathcal{U}$ instead of $\mathcal{U}_{\infty}$ for notational simplicity. \item $\mu_t^*$ is the probability distribution of $x_t^*$, which is given by $$\begin{aligned} &dx_t^{*} = \biggl( b(x_t^{*},\mu_t^{*}) + \dot{\xi}_t^{*+}- \dot{\xi}_t^{*-} \biggl) dt + \sigma dW_t ,\\ &s\le t\le T, \quad x_s^{*} \sim \mu_s^*.
\end{aligned}$$ \end{enumerate} Such a pair $(\xi^{*+}_\cdot,\xi^{*-}_\cdot)\in\mathcal{U}_{\theta}$ and $\{\mu_t^*\}\in\mathcal{M}_{[0,T]}$ constitutes a solution of \eqref{MFGbounded1}. \begin{remark} \label{remark-fixedinitial} Note that here the game value is $v_{\theta}(s, x|\{\mu^*_t\})=\inf_{(\xi_\cdot^{ +}, \xi_\cdot^{ -}) \in \mathcal{U}_{\theta}} J_{\theta} (s, x , \xi_\cdot^{ +}, \xi_\cdot^{ -} |\{\mu^*_t\})$, with $x_s^*=x$ a sample from $\mu^*_s$. An alternative definition of the game is to solve for $\tilde{v}_{\theta}(s, \mu^*_s)$ with $\tilde{v}_{\theta}(s, \mu^*_s)=\mathbb{E}_{\mu^*_s}[v_{\theta}(s, x_s)]$. This game value can be easily recovered from $v_{\theta}(s,x)$. (See also \cite{GX2019} and \cite[Section 2.2.2]{LZ2018} for a similar setup.) \end{remark} For ease of exposition, we will use the following notion of a control function, for a fixed $\mu_t$. \begin{definition} [Control function] A control of bounded velocity $\xi_t$ is called \emph{Markovian} if $ d\xi_t = \dot{\xi}_t dt = \varphi (t,x_t|\{\mu_t\}) dt$ for some function $\varphi:[0,T]\times \mathbb{R}\rightarrow \mathbb{R}$; $\varphi(t,x_t|\{\mu_t\})$ is called the \emph{control function} for the fixed $\{\mu_t\}$. A control of finite variation $\xi_t$ is called \emph{Markovian} if $ d\xi_t = d\varphi(t,x_t|\{\mu_t\})$ for some function $\varphi$; $\varphi$ is called the \emph{control function} for the fixed $\{\mu_t\}$. \end{definition} \subsection{Main results} \subsubsection{Technical Assumptions.} The main results are derived under the following assumptions. Unless specified otherwise, $c$ denotes some constant whose value may vary with the context. \begin{itemize} \item[(A1).]
There exists some constant $c$ such that $|b_0(x,y)|\leq c$, and $b_0(x,y)$ and $f_0(x,y)$ are Lipschitz continuous in both $x$ and $y$ in the sense that $|b_0(x_1,y_1)-b_0(x_2,y_2)|\leq Lip(b_0)(|x_1-x_2|+|y_1-y_2|)$ and $|f_0(x_1,y_1)-f_0(x_2,y_2)|\leq Lip(f_0)(|x_1-x_2|+|y_1-y_2|)$, for some $Lip(b_0)$, $Lip(f_0)>0$. In addition, $|b(x,\mu)| \le c$, and $b(x,\mu)$ and $ f(x,\mu) $ are Lipschitz continuous in $x$ and $\mu$ in the sense that $| b(x_1,\mu^1)- b(x_2,\mu^2)| \le Lip(b)( |x_1-x_2| + D^1(\mu^1,\mu^2))$ and $| f(x_1,\mu^1)- f(x_2,\mu^2)| \le Lip(f)( |x_1-x_2| + D^1(\mu^1,\mu^2))$, for some $Lip(b)$, $Lip(f) >0$. \item[(A2).] $f(x,\mu) $ has a first-order derivative in $x$ with $f(x,\mu)$ and $\partial_x f(x,\mu)$ satisfying the polynomial growth condition. Moreover, for any fixed $\mu \in \mathcal{P}_2(\mathbb{R})$, $f(x,\mu)$ is convex and nonlinear in $x$. In addition, there exists some constant $c$ satisfying $|f(x,\mu)| \leq c\biggl(1 + |x|^2+ \int_\mathbb{R} y^2 \mu(dy) \biggl)$ for any $x\in \mathbb{R}, \mu \in \mathcal{P}_2(\mathbb{R})$. Note that this assumption is well-posed: by definition of $\mathcal{M}_{[0,T]}$, $\mu \in \mathcal{P}_2$. \item[(A3).] $b(x,\mu) $ has first- and second-order derivatives with respect to $x$ with uniformly continuous and bounded derivatives in $x$. \item[(A4).] $-\gamma_1<\gamma_2$. This ensures the finiteness of the value function. Indeed, take game (\ref{Nsingular}) with $ -\gamma_1 > \gamma_2$. Then, letting $d\xi_t^{i+} = d\xi_t^{i-} = M $ and $M \rightarrow \infty$, we will have $J^{i,N}\rightarrow -\infty$. \item[(A5).]
(Monotonicity of the cost function) Either \begin{align*} \mbox{(i).} &\int_\mathbb{R} (f(x,\mu^1) - f(x,\mu^2)) (\mu^1 -\mu^2) (dx) \geq 0, \\ &\text{ for any } \mu^1 , \mu^2 \in \mathcal{P}_2 (\mathbb{R}) , \end{align*} and $H(x,p ) = \inf_{\dot{\xi}^+,\dot{\xi}^- \in [0,\theta] } \limits \{ (\dot{\xi}^+ -\dot{\xi}^- )p +\gamma_1 \dot{\xi}^+ + \gamma_2 \dot{\xi}^- \} $ satisfies the following condition for any $x,p,q \in \mathbb{R}$: \begin{align*} &\text{if }H(x,p+q) - H(x,p) - \partial_p H(x,p) q = 0, \\ &\text{ then } \partial_p H(x,p+q) = \partial_p H(x,p); \ \ \ \mbox{or} \end{align*} \begin{align*} \mbox{(ii).} &\int_\mathbb{R} (f(x,\mu^1) - f(x,\mu^2)) (\mu^1 -\mu^2) (dx)> 0, \\ &\text{ for any } \mu^1 \neq \mu^2 \in \mathcal{P}_2 (\mathbb{R}). \end{align*} As in \cite{LL2007}, Assumption (A5) is critical for the uniqueness of the solution of (\ref{MFGbounded1}), as will be clear from the proof of Proposition \ref{uniq} on the uniqueness of the fixed point. \item[(A6).] (Rationality of players) For any control function $ \varphi $, any $t \in [0,T]$, any fixed $\{\mu_t\}$, and any $ x,y \in \mathbb{R}$, $(x-y)\biggl( \varphi (t,x|\{\mu_t\})- \varphi (t,y|\{\mu_t\})\biggl) \leq 0 $. Intuitively, this assumption says that the better the state of an individual player, the less control the player exercises in order to minimize her cost. This assumption first appeared in \cite{EKPPQ1997} in the analysis of BSDEs. \end{itemize} \begin{mainthm} Assume \emph{(A1)--(A6)}. Then, \begin{itemize} \item[a).] For any fixed $\theta$, the optimal control to \emph{(\ref{MFGbounded1})} is an $\epsilon_{N }$-NE to \emph{(\ref{Nbound})}, provided that the distribution of $x_{s,\theta}^N$ at any given initial time $s\in[0,T]$ among the $N$ players is permutation invariant. Here $\epsilon_{N } = O\biggl(\frac{1}{\sqrt{N}}\biggl)$; \item[b).]
The optimal control to \emph{(\ref{MFGbounded1})} is an $(\epsilon_N + \epsilon_\theta)$-NE to \emph{(\ref{Nsingular})}, provided that the distribution of $x_{s}^N$ at any given initial time $s\in[0,T]$ among the $N$ players is permutation invariant. Here $\epsilon_{N } = O\biggl(\frac{1}{\sqrt{N}}\biggl)$, and $\epsilon_\theta \rightarrow 0$ as $\theta \rightarrow \infty $. \end{itemize} \label{Thm-Nash} \end{mainthm} \section{Derivation of the Main Theorem} The relationship between the stochastic games (\ref{Nsingular}), (\ref{Nbound}), and (\ref{MFGbounded1}) is built in three steps. The first step concerns the analysis of the stochastic control problem associated with (\ref{MFGbounded1}). \subsection{Control problems} To start, we introduce the underlying stochastic control problems. \subsubsection{Control Problem of Bounded Velocity} Let $\{\mu_t\} \in \mathcal{M}_{[0,T]}$ be a fixed exogenous flow of probability measures, and consider the following control problem: \begin{align} \label{Control}\tag{Control-BD} \begin{split} &\ \ \ \ v_{\theta} (s,x |\{\mu_t\}) \\ & \triangleq \inf_{(\xi_\cdot^{ +}, \xi_\cdot^{ -}) \in \mathcal{U}_{\theta} } J_{\theta} (s,x , \xi_\cdot^{ +}, \xi_\cdot^{ -}| \{\mu_t\}) \\& = \inf_{ (\xi_\cdot^{+}, \xi_\cdot^{-})\in \mathcal{U}_{\theta} } \mathbb{E} \biggl[ \int_s^T \biggl( f(x_t, \mu_t)+ \gamma_1 \dot{\xi}_t^{+}+\gamma_2 \dot{\xi}_t^{-} \biggl) dt \biggl] \end{split} \end{align} subject to $dx_t=b(x_t, \mu_t)dt+\sigma dW_t$, $x_s=x$.
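The pointwise minimization of the Hamiltonian associated with (\ref{Control}) can be carried out explicitly; the following elementary computation is only a sketch based on the definitions above. Since the running cost is linear in $(\dot{\xi}^+,\dot{\xi}^-)$ and the two controls are decoupled, writing $p$ for $\partial_x v_{\theta}$,
\begin{align*}
\inf_{\dot{\xi}^+,\dot{\xi}^- \in [0,\theta]} \biggl\{ (\dot{\xi}^+-\dot{\xi}^-)p + \gamma_1\dot{\xi}^+ + \gamma_2\dot{\xi}^- \biggl\}
= \min \biggl\lbrace (p+\gamma_1)\theta, 0 \biggl\rbrace + \min \biggl\lbrace (\gamma_2-p)\theta, 0 \biggl\rbrace,
\end{align*}
with each infimum attained at an endpoint of $[0,\theta]$. Under (A4), $p+\gamma_1$ and $\gamma_2-p$ cannot both be negative, so the right-hand side equals $\min \lbrace (p+\gamma_1)\theta, (\gamma_2-p)\theta, 0 \rbrace$; this is the bang-bang structure behind the associated HJB equation.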
\subsubsection{Control Problem of Finite Variation} If the controls are of finite variation, that is, $\theta=\infty$, the problem becomes \begin{equation} \label{Control-FV} \tag{Control-FV} \begin{aligned} &\hspace{-10pt}v(s, x |\{\mu_t\} ) \triangleq \inf_{ (\xi_\cdot^{+}, \xi_\cdot^{-}) \in \mathcal{U}} \mathbb{E} \biggl[ \int_s^T \biggl( f(x_t, \mu_t) dt + \gamma_1 d{\xi}_t^{+}+\gamma_2 d{\xi}_t^{-} \biggl) \biggl] \end{aligned}\end{equation} subject to \begin{equation*} dx_t = b(x_t,\mu_t) dt + \sigma dW_t+ d\xi_t^{+}- d\xi_t^{-} , \quad x_{s-} = x. \end{equation*} Note that problem (\ref{Control}) is a classical stochastic control problem. The associated HJB equation with the terminal condition $v_{\theta} (T, x|\{\mu_t\})=0$ is given by \begin{align} \begin{split} - \partial_t v_{\theta} &= \inf_{\dot{\xi}^+,\dot{\xi}^- \in [0,\theta]} \biggl\{ \biggl( b(x,\mu ) + (\dot{\xi}^+-\dot{\xi}^-) \biggl) \partial_x v_{\theta} \biggl.\\ &\biggl.{} + \biggl(f(x ,\mu ) +\gamma_1\dot{\xi} ^+ + \gamma_2 \dot{\xi}^- \biggl) \biggl\} + \frac{\sigma^2}{2}\partial_{xx} v_{\theta} \\&=\min \biggl\lbrace ( \partial_x v_{\theta}+ \gamma_1)\theta,(- \partial_x v_{\theta} + \gamma_2)\theta, 0 \biggl\rbrace \\ &+b(x,\mu ) \partial_x v_{\theta} +f(x ,\mu )+ \frac{\sigma^2}{2}\partial_{xx} v_{\theta}. \end{split} \label{HJBHJBHJB} \end{align} \begin{proposition}\label{optimization} Assume \emph{(A1)--(A4)}. The HJB Eqn. \emph{(\ref{HJBHJBHJB})} has a unique solution $v_{\theta}$ in $ C^{1,2}( [0,T] \times \mathbb{R})$ with polynomial growth. Furthermore, this solution is the value function of problem \emph{(\ref{Control})}, and the corresponding optimal control function is \begin{equation}\label{optcontrols} \begin{aligned} &\hspace{-40pt}\varphi_\theta (t,x_t|\{\mu_t\})= \begin{cases} \theta,\,\partial_x v_{\theta} (t,x_t|\{\mu_t\}) \leq -\gamma_1, \\ 0,\,-\gamma_1 < \partial_x v_{\theta} (t,x_t|\{\mu_t\}) < \gamma_2, \\ -\theta ,\,\gamma_2 \leq \partial_x v_{\theta} (t, x_t|\{\mu_t\}).
\end{cases} \end{aligned}\end{equation} Moreover, the optimal control function $\varphi_\theta (t,x|\{\mu_t\})$ is unique and so is the optimally controlled state process $x_{t,\theta}$ with $$\begin{aligned}&\hspace{-30pt}dx_{t,\theta} = \biggl( b(x_{t,\theta},\mu_t) + \varphi_\theta (t,x_{t,\theta}|\{\mu_t\}) \biggl) dt + \sigma dW_t , \,x_{s,\theta} = x.\end{aligned}$$ \end{proposition} \begin{proof} By~\cite[Theorem 6.2, Chapter VI]{FR2012}, the HJB Eqn. (\ref{HJBHJBHJB}) has a unique solution $v_{\theta}$ in $C^{1,2}( [0,T] \times \mathbb{R})$ with polynomial growth. A standard verification argument shows that it is the value function of problem (\ref{Control}). Moreover, the optimal control function is $$\varphi_\theta (t,x_t|\{\mu_t\})= \begin{cases} \theta,\,\partial_x v_{\theta} (t,x_{t,\theta}|\{\mu_t\}) \leq -\gamma_1, \\ 0,\,-\gamma_1 < \partial_x v_{\theta} (t,x_{t,\theta}|\{\mu_t\}) < \gamma_2, \\ -\theta,\,\gamma_2 \leq \partial_x v_{\theta} (t,x_{t,\theta}|\{\mu_t\}). \end{cases} $$ In particular, since the value function $v_{\theta} (t,x|\{\mu_t\}) $ is unique, the optimal control function $\varphi_\theta (t,x|\{\mu_t\})$ given by (\ref{optcontrols}) is uniquely determined. It remains to prove that the optimally controlled state process $x_{t,\theta}$ exists and is unique. For any given fixed $x_{t,\theta}^n$, consider a mapping $\Phi$ such that $\Phi(x_{t,\theta}^n) = x_{t,\theta}^{n+1}$, where $x_{t,\theta}^{n+1}$ is a solution to the following SDE: \begin{equation} \label{mapeqn} \begin{aligned} &dx_{t,\theta}^{n+1} = \biggl( b(x_{t,\theta}^n,\mu_t) + \varphi_\theta (t,x_{t,\theta}^{n+1}|\{\mu_t\}) \biggl) dt + \sigma dW_t ,\\ &x_{s,\theta}^{n+1} = x. \end{aligned}\end{equation} By~\cite{Z1974}, for any given $x_{t,\theta}^n$, the SDE (\ref{mapeqn}) has a unique solution $x_{t,\theta}^{n+1}$, so the mapping $\Phi$ is well defined.
Then, for any $n\in \mathbb{N}$, \begin{align*} d(x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2}) = &\biggl( b(x_{t,\theta}^n,\mu_t)-b(x_{t,\theta}^{n+1} ,\mu_t) + \biggl.\\ &\hspace{-30pt}\biggl.\varphi_\theta (t,x_{t,\theta}^{n+1 }|\{\mu_t\}) - \varphi_\theta (t,x_{t,\theta}^{n+2} |\{\mu_t\}) \biggl) dt. \end{align*} Because $\varphi_\theta (t,x|\{\mu_t\}) $ is nonincreasing in $x$, \begin{align*} &d(x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2})^2\\ & =2(x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2})\biggl( b(x_{t,\theta}^n,\mu_t)-b(x_{t,\theta}^{n+1} ,\mu_t)\biggl.\\ &\hspace{10pt}\biggl.{} + \varphi_\theta (t,x_{t,\theta}^{n+1}|\{\mu_t\}) - \varphi_\theta (t,x_{t,\theta}^{n+2} |\{\mu_t\}) \biggl) dt\\ & \leq 2 Lip(b) |x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2}| | x_{t,\theta}^{n}-x_{t,\theta}^{n+1}| dt\\ & \leq Lip(b) \biggl( |x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2}|^2+ | x_{t,\theta}^{n}-x_{t,\theta}^{n+1}|^2 \biggl) dt. \end{align*} By Gronwall's inequality, for any $t\in [0,T]$, \begin{eqnarray*} && |x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2}|^2 \\ &&\le Lip(b)\exp\biggl( Lip(b)t\biggr) \int_0^t | x_{s,\theta}^{n}-x_{s,\theta}^{n+1}|^2 ds \\ &&\le \frac{\biggl(Lip(b)t\biggr)^n \exp\biggl( nLip(b)t\biggr)}{n!} \sup_{s\in[0,t]}| x_{s,\theta}^{1}-x_{s,\theta}^{2}|^2 , \end{eqnarray*} where the last bound follows by iterating the first inequality. Since $\frac{(Lip(b)T)^n \exp( nLip(b)T)}{n!} \rightarrow 0$ as $n \rightarrow \infty$, the iterates form a Cauchy sequence and $\Phi$ admits a unique fixed point; that is, the SDE (\ref{mapeqn}) has a unique fixed point solution. Therefore, there exists a unique optimally controlled state process $x_{t,\theta}$ for problem (\ref{Control}). Furthermore, the optimal Markovian control $(\xi_{\cdot,\theta}^+,\xi_{\cdot,\theta}^- )$ to (\ref{Control}) also uniquely exists. \end{proof} Next, we establish the regularity of the value function to problem (\ref{Control}). \begin{proposition}\label{strictconvex} Assume \emph{(A1)--(A4)}. For any fixed $t \in [0,T]$, the value function $v_{\theta} (t, x|\{\mu_t\}) $ for problem (\ref{Control}) is strictly convex in $x$.
\end{proposition} \begin{proof} Fix any $x_1,x_2\in \mathbb{R}$ and any $\lambda \in [0,1]$. For any $ (\xi_\cdot^{1,+}, \xi_\cdot^{1,-}) \in \mathcal{U}_{\theta} $ and $ (\xi_\cdot^{2,+}, \xi_\cdot^{2,-}) \in \mathcal{U}_{\theta} $, by the convexity of $f$, \begin{align*} & \lambda J_{\theta} (s,x_1 , \xi_\cdot^{1,+}, \xi_\cdot^{1,-}|\{\mu_t\}) \\ &\hspace{20pt}+ (1-\lambda) J_{\theta} (s,x_2 , \xi_\cdot^{2,+}, \xi_\cdot^{2,-}| \{\mu_t\}) \\ \geq & J_{\theta} \biggl(s,\lambda x_1 + (1-\lambda) x_2 ,\biggl.\\ &\hspace{20pt}\biggl.\lambda \xi_\cdot^{1,+} + (1-\lambda) \xi_\cdot^{2,+},\lambda\xi_\cdot^{1,-} + (1-\lambda)\xi_\cdot^{2,-}| \{\mu_t\}\biggl) \\\geq & v_{\theta} (s, \lambda x_1 + (1-\lambda) x_2| \{\mu_t\}). \end{align*} Since this holds for any $ (\xi_\cdot^{1,+}, \xi_\cdot^{1,-}) \in \mathcal{U}_{\theta} $ and $ (\xi_\cdot^{2,+}, \xi_\cdot^{2,-}) \in \mathcal{U}_{\theta} $, taking the infimum first over $(\xi_\cdot^{1,+}, \xi_\cdot^{1,-})$ and then over $(\xi_\cdot^{2,+}, \xi_\cdot^{2,-})$ yields \begin{align*} &\lambda v_{\theta} (s,x_1|\{\mu_t\}) + (1-\lambda) J_{\theta} (s,x_2 , \xi_\cdot^{2,+}, \xi_\cdot^{2,-}|\{\mu_t\}) \\ &\hspace{30pt}\geq v_{\theta} (s, \lambda x_1 + (1-\lambda) x_2|\{\mu_t\}),\\ &\lambda v_{\theta} (s,x_1|\{\mu_t\}) + (1-\lambda) v_{\theta} (s,x_2| \{\mu_t\})\\ &\hspace{30pt}\geq v_{\theta} (s, \lambda x_1 + (1-\lambda) x_2|\{\mu_t\}). \end{align*} Hence, $v_{\theta} (s, x|\{\mu_t\})$ is convex in $x$. By Proposition \ref{optimization}, $v_{\theta} (s, x| \{\mu_t\})$ is a $\mathcal{C}^{1,2} ([0,T]\times \mathbb{R})$ solution to the equation \begin{align*} - \partial_t v_{\theta} =&\min \biggl\lbrace ( \partial_x v_{\theta} + \gamma_1)\theta,(- \partial_x v_{\theta} + \gamma_2)\theta, 0 \biggl\rbrace \\ &+b(x,\mu ) \partial_x v_{\theta} +f(x ,\mu )+ \frac{\sigma^2}{2}\partial_{xx} v_{\theta}. \end{align*} Since $f(x,\mu)$ is nonlinear in $x$, the solution to this equation is also nonlinear in $x$. Hence, $v_{\theta} (s, x|\{\mu_t\})$ is strictly convex.
\end{proof} With this convexity, we have \begin{theorem} \label{thetainfty} Assume \emph{(A1)--(A4)}. Then for any $(s,x) \in [0,T]\times \mathbb{R}$, as $\theta\rightarrow \infty$, the value function $v_{\theta} (s,x|\{ \mu_t\}) $ of \emph{(\ref{Control})} converges to the value function $v(s,x|\{ \mu_t\}) $ of \emph{(\ref{Control-FV})}. Moreover, there exists an optimal control of a feedback form for \emph{(\ref{Control-FV})}. \end{theorem} \begin{proof}Fix $\{\mu_t\} \in \mathcal{M}_{[0,T]}$. For any $(\zeta_{\cdot}^+,\zeta_{\cdot}^-) \in \mathcal{U}$, since each path of a finite variation process is almost everywhere differentiable, there exists a sequence of bounded velocity functions which converges to the path as $\theta \rightarrow \infty$. Hence, there exists a sequence $\{(\zeta_{\cdot,\theta}^+ ,\zeta_{\cdot,\theta}^-)\}_{\theta \in [0,\infty)}$ such that $(\zeta_{\cdot,\theta}^+,\zeta_{\cdot,\theta}^- ) \in \mathcal{U}_{\theta}$ and $\mathbb{E} \int_0^T |\dot{\zeta}_{t,\theta}^+ dt - d\zeta_{t}^+ | \rightarrow 0 , \mathbb{E} \int_0^T |\dot{\zeta}_{t,\theta}^- dt - d\zeta_{t}^- | \rightarrow 0$ as $\theta \rightarrow \infty$. Define $\epsilon_\theta$ as \begin{align}\label{epsilontheta} \hspace{-25pt}\epsilon_\theta = O\biggl( \mathbb{E}\int_0^T |\dot{\zeta}_{t,\theta}^+dt - d\zeta_{t}^{+} |+ \mathbb{E}\int_0^T |\dot{\zeta}_{t,\theta}^- dt-d\zeta_{t}^{-}| \biggl), \end{align} and $\epsilon_\theta \rightarrow 0$ as $\theta \rightarrow \infty$. Denote \begin{align*} d\hat{x}_{t,\theta} & = (b(\hat{x}_{t,\theta}, \mu_t ) +\dot{\zeta}_{t,\theta}^+ - \dot{\zeta}_{t,\theta}^-) dt + \sigma dW_t, \ \ \hat{x}_{s,\theta}= x, \\ d\hat{x}_t & = b(\hat{x}_t,\mu_t) dt+ \sigma dW_t + d\zeta_{t}^{+}- d\zeta_{t}^{-}, \ \ \hat{x}_{s-} = x. 
\end{align*} Then, for any $\tau \in [s,T]$, \begin{align*} |\hat{x}_{\tau,\theta} - \hat{x}_\tau| & \le \int_s^\tau Lip(b)|\hat{x}_{t,\theta} - \hat{x}_t| dt \\ &+ \int_s^\tau |\dot{\zeta}_{t,\theta}^+dt - d\zeta_{t}^{+} |+ \int_s^\tau |\dot{\zeta}_{t,\theta}^- dt - d\zeta_{t}^{-}| . \end{align*} By Gronwall's inequality, \begin{align*} &\mathbb{E} |\hat{x}_{\tau,\theta} - \hat{x}_\tau| \le O\biggl(\mathbb{E}\int_0^\tau |\dot{\zeta}_{t,\theta}^+dt - d\zeta_{t}^{+} |+ \mathbb{E}\int_0^\tau |\dot{\zeta}_{t,\theta}^- dt - d\zeta_{t}^{-}| \biggl). \end{align*} Consequently, \begin{align*} & \biggl|J(s,x,\zeta_{t}^+,\zeta_{t}^- |\{\mu_t\} ) - J_{\theta} (s,x,\zeta_{t,\theta}^+,\zeta_{t,\theta}^-|\{\mu_t\} )\biggl| \\ \le & \ \mathbb{E}\biggl[ \biggl| \int_s^T f(\hat{x}_t, \mu_t) - f(\hat{x}_{t,\theta},\mu_t) \\ &\hspace{30pt}+\gamma_1 d\zeta_{t}^+ + \gamma_2 d\zeta_{t}^- - \gamma_1 \dot{\zeta}_{t,\theta}^+dt - \gamma_2 \dot{\zeta}_{t,\theta}^-dt \biggl| \biggl] \\ \le & \ O\biggl(\mathbb{E}\int_0^T |\dot{\zeta}_{t,\theta}^+dt - d\zeta_{t}^{+} |+ \mathbb{E}\int_0^T |\dot{\zeta}_{t,\theta}^- dt-d\zeta_{t}^{-}| \biggl). \end{align*} Therefore, $\biggl |v (s,x|\{ \mu_t\}) - v_{\theta} (s,x|\{ \mu_t\})\biggl| \rightarrow 0 \text{ as } \theta \rightarrow \infty$. Now an argument similar to that of Corollary 4.11 in \cite{MT1989} shows the existence of a feedback control for \eqref{Control-FV}. \end{proof} \subsection{Game (MFG-BD) }\label{proof} Our next step is to analyze the game (MFG-BD). In particular, we have the following result. \begin{theorem} Assume \emph{(A1)--(A6)}. Then there exists a unique solution $((\xi_\cdot^{*+},\xi_\cdot^{*-}),\{\mu_t^*\} )$ of the game \eqref{MFGbounded1}. Moreover, the corresponding value function $v_{\theta}(s,x)$ for the game \eqref{MFGbounded1} is in $C^{1,2}( [0,T] \times \mathbb{R})$ with polynomial growth. \label{mainthm} \end{theorem} The proof of the existence of the MFG solution proceeds as follows.
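Schematically, the fixed-point construction can be summarized by the diagram
$$\{\mu_t\} \ \overset{\Gamma_1}{\longmapsto} \ \biggl( \varphi_\theta(\cdot,\cdot|\{\mu_t\}), \{\mu_t\}\biggl) \ \overset{\Gamma_2}{\longmapsto} \ \{\tilde{\mu}_t\} = \Gamma(\{\mu_t\}),$$
where $\Gamma_1$ solves the control problem (\ref{Control}) for the fixed flow $\{\mu_t\}$, $\Gamma_2$ maps the resulting optimally controlled state process to its flow of marginal distributions, and a solution of \eqref{MFGbounded1} corresponds to a fixed point $\Gamma(\{\mu_t^*\}) = \{\mu_t^*\}$.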
First, from Proposition \ref{optimization} we see that for any given fixed $\{\mu_t\}$ there exists a unique optimal control function as $\varphi_\theta(t,x |\{\mu_t\} ) $. Now, one can define a mapping $\Gamma_1 $ from $\mathcal{M}_{[0,T]}$ to a class of pairs of the optimal control function $\varphi_{\theta}$ and the fixed flow of probability measures $\{\mu_t\}$ such that $\Gamma_1 (\{\mu_t\}) = \biggl( \varphi_\theta(t,x|\{\mu_t\}) , \{\mu_t\}\biggl).$ Moreover, by Proposition \ref{optimization} the optimally controlled process $x_{t,\theta} $ under the fixed $\{\mu_t\}$ exists uniquely with $x_{s,\theta} = x,$ \begin{align*} d x_{t,\theta} & = \biggl( b(x_{t,\theta},\mu_t) + \varphi_\theta(t,x_{t,\theta}|\{\mu_t\}) \biggl) dt + \sigma dW_t. \end{align*} Consequently, we can define $\Gamma_2 $ so that $\Gamma_2 \biggl( \varphi_\theta(t,x|\{\mu_t\}), \{\mu_t\}\biggl) = \{ \tilde{\mu}_t \} ,$ where $ \tilde{\mu}_t $ is the probability measure of $x_{t,\theta}$ for each $t\in [0,T]$. Now, define a mapping $\Gamma$ as $\Gamma(\{ \mu_t\})= \Gamma_2 \circ \Gamma_1 (\{\mu_t\}) = \{ \tilde{\mu}_t\}.$ We will use the Schauder fixed point theorem~\cite[Theorem 4.1.1]{Smart1980} to show the existence of a fixed point. The key is to prove that $\Gamma$ is a continuous mapping of $\mathcal{M}_{[0,T]} $ into $ \mathcal{M}_{[0,T]}$, and the range of $\Gamma$ is relatively compact \cite{B2013}. \begin{proposition}\label{MM} Assume \emph{(A1)--(A4)}. $\Gamma$ is a mapping from $\mathcal{M}_{[0,T]}$ to $\mathcal{M}_{[0,T]}$. \end{proposition} \begin{proof} For any $\{\mu_t\}$ in $\mathcal{M}_{[0,T]}$, let us prove that $\{\tilde{\mu}_t\} = \Gamma (\{\mu_t\})$ is also in $\mathcal{M}_{[0,T]}$. 
Without loss of generality, suppose $s > t$, and $$x_{s} = x_t + \int_t^{s} \biggl(b(x_r,\mu_r)+ \varphi_\theta(r,x_r|\{\mu_t\} )\biggl) dr + \int_t^{s}\sigma dW_r.$$ Since $b(x, \mu ) $ is bounded and $|\varphi_\theta(r,x_r|\{\mu_t\} )| \leq \theta$, there exists some constant $M$ such that $\mathbb{E}\biggl| b(x_r,\mu_r)+ \varphi_\theta(r,x_r|\{\mu_t\} )\biggl| \le M $ for any $r \in [0,T]$. Hence, \begin{align*} &D^1(\tilde{\mu}_s,\tilde{\mu}_t ) \leq \mathbb{E} | x_s -x_t | \leq \mathbb{E} \int_t^s \biggl|b(x_r,\mu_r)+ \varphi_\theta(r,x_r |\{\mu_t\})\biggl| dr \\ &\hspace{30pt} + \sigma \mathbb{E} \sup_{r \in [t,s]}\limits |W_r-W_t | \leq M|s-t| + \sigma |s-t|^{\frac{1}{2}}. \end{align*} Therefore, $ \sup_{s\neq t}\frac{ D^1(\tilde{\mu}_t,\tilde{\mu}_s) }{|t-s|^{\frac{1}{2}}} \leq c$. Moreover, for any $t \in [0,T]$, since the drift $b+\varphi_\theta$ is bounded by $M$, \begin{align*} \int_\mathbb{R} |x|^2 \tilde{\mu}_t(dx) &\leq 3 \biggl( \int_\mathbb{R} |x|^2 \tilde{\mu}_0(dx) + M^2 T^2 + \sigma^2 T \biggl), \end{align*} and $\sup_{t \in [0,T]} \limits \int_\mathbb{R} |x|^2 \tilde{\mu}_t(dx) \leq c$. \end{proof} \begin{proposition} \label{continuous} Assume \emph{(A1)--(A6)}. $\Gamma : \mathcal{M}_{[0,T]} \rightarrow \mathcal{M}_{[0,T]}$ is continuous. \end{proposition} \begin{proof} Let $ \{ \mu_t^n \} \in \mathcal{M}_{[0,T]}$, $n = 1,2, \ldots,$ be a sequence of flows of probability measures with $d_\mathcal{M}(\{\mu_t^n\},\{\mu_t\}) \rightarrow 0 $ as $n \rightarrow \infty$, for some $\{\mu_t\} \in \mathcal{M}_{[0,T]}$. Fix $\tau \in [0,T)$. By Proposition \ref{optimization}, for each $\{\mu_t^n\} $, problem (\ref{Control}) has a value function $v_{\theta}^n(s,x | \{\mu_t^n\})$ with the optimal control function $\varphi^n_\theta(t,x|\{\mu_t^n\})$.
Let $\{x_t^n\}$ be the corresponding optimally controlled process: $$\begin{aligned} &dx_t^n = \biggl(b(x_t^n ,\mu_t^n)+\varphi^n_\theta(t,x_t^n | \{ \mu_t^n \} )\biggl)dt + \sigma dW_t, \\ & x_\tau ^n =x, \ \ \tau \leq t \leq T. \end{aligned}$$ Let $ \{\tilde{\mu}_t^n\}$ be the flow of probability measures of $\{x_t^n\}$; then $\Gamma (\{\mu_t^n\} ) = \{\tilde{\mu}_t^n\}$. Similarly, for $\{\mu_t \} $, problem (\ref{Control}) has a value function $v_{\theta} (s,x|\{\mu_t \})$ with the optimal control function $\varphi_\theta(t,x | \{ \mu_t \})$. Let $\{x_t\}$ be the corresponding optimally controlled process: $$\begin{aligned} & dx_t = \biggl(b(x_t ,\mu_t)+ \varphi_\theta(t,x_t | \{ \mu_t \})\biggl)dt + \sigma dW_t, \\ & x_\tau =x, \ \ \tau \leq t \leq T. \end{aligned}$$ Let $ \{\tilde{\mu}_t\}$ be the flow of probability measures of $\{x_t\}$; then $\Gamma (\{\mu_t\} ) = \{\tilde{\mu}_t\}$. To show that $\Gamma$ is continuous, we need to show $\lim_{n\to\infty}d_{\mathcal{M}} \biggl(\{\tilde{\mu}_t^n \}, \{\tilde{\mu}_t\}\biggl) = 0$. This is established in four steps. Step 1. We first establish some relation between $D^2 (\{\tilde{\mu}_t^n \}, \{\tilde{\mu}_t\})$ and $D^2(\{\mu_t^n \}, \{\mu_t\})$. Note here $ D^1 (\tilde{\mu}_t,\tilde{\mu}_t^n) \le D^2(\tilde{\mu}_t,\tilde{\mu}_t^n) $. For any $s \in [\tau ,T]$, \begin{align*} d(x_s -x_s^n)= &\biggl(b(x_s,\mu_s)-b(x_s^n,\mu_s^n) \\ &\hspace{10pt}+ \varphi_\theta(s, x_s | \{ \mu_t \})- \varphi^n_\theta(s,x^n_s | \{ \mu_t^n \})\biggl) ds. \end{align*} Then, for any $t \in [\tau ,T]$, \begin{align*} &|x_t-x^n_t|^2 = 2 \int_\tau^t \biggl(b(x_s ,\mu_s )-b(x^n_s, \mu_s^n)\\ &\hspace{10pt}+ \varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta (s,x_s^n | \{ \mu^n_t \}) \biggl)(x_s-x_s^n)ds. \end{align*} By Assumption (A1), the drift difference term satisfies \begin{align*} & \biggl(b(x_s,\mu_s)-b(x_s^n,\mu_s^n)\biggl)(x_s-x_s^n) \\ & \leq Lip(b)\biggl(|x_s-x_s^n|+D^1(\mu_s,\mu_s^n)\biggl)|x_s-x_s^n|\\ & \leq Lip(b)|x_s - x_s^n|^2 \\ &\hspace{10pt}+ \frac{Lip(b)}{2}\biggl((D^1(\mu_s,\mu_s^n))^2+ |x_s-x_s^n|^2\biggl).
\end{align*} By Assumption (A6), \begin{align*} & ( \varphi_\theta(s,x_s | \{ \mu_t \})- \varphi^n_\theta (s,x_s^n | \{ \mu^n_t \}))(x_s -x_s^n) \\ \leq & \ \biggl( \varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )\\ &\hspace{10pt}+ \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )- \varphi^n_\theta (s,x_s^n | \{ \mu^n_t \})\biggl)(x_s -x_s^n) \\ \leq & \ \frac{1}{2}\biggl(|\varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )|^2 +|x_s -x_s^n|^2\biggl). \end{align*} Consequently, \begin{align*} |x_t-x_t^n|^2 \leq & \int_\tau^t \biggl[ ( 3Lip(b )+1) |x_s-x_s^n|^2 \\ & \hspace{10pt}+ Lip(b) (D^1(\mu_s,\mu_s^n))^2 \\ & \hspace{10pt}+ |\varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )|^2 \biggl] ds . \end{align*} By Gronwall's inequality, \begin{equation}\label{ineqcontinuity1} \begin{aligned} &(D^2(\tilde{\mu}_t,\tilde{\mu}_t^n))^2 \leq c \int_\tau^t Lip(b ) (D^1(\mu_s,\mu_s^n))^2 \\ & \hspace{10pt}+ \mathbb{E}\biggl[|\varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )|^2 \biggl] ds , \end{aligned} \end{equation} for some constant $c$ depending on $T$ and $Lip(b )$. Step 2. Now we prove that for any $(t,x)\in [\tau,T]\times \mathbb{R} $, $\displaystyle{\lim_{n\to\infty}\partial_x v_{\theta}^n(t,x|\{\mu_t^n\})= \partial_x v_{\theta}(t,x|\{\mu_t \})}$. By Proposition \ref{optimization}, $v_{\theta}$ and $v_{\theta}^n$ are the solutions to the HJB Eqn. (\ref{HJBHJBHJB}). For notational simplicity, denote \begin{align*} &\varphi_{1, \theta}(s,x| \{ \mu_t \}) = \max \{\varphi_\theta(s,x| \{ \mu_t \}),0\}, \\ &\varphi_{2,\theta}(s,x| \{ \mu_t \}) =- \min \{\varphi_\theta(s,x| \{ \mu_t \}),0\},\\ &\varphi^n_{1, \theta}(s,x | \{ \mu^n_t \}) = \max \{\varphi^n_\theta(s,x | \{ \mu^n_t \}),0\}, \\ &\varphi^n_{2, \theta}(s,x | \{ \mu^n_t \}) = - \min \{\varphi^n_\theta(s,x | \{ \mu^n_t \}),0\}.
\end{align*} Since $\varphi_{1, \theta}(s,x| \{ \mu_t \})$ and $\varphi_{2, \theta}(s,x| \{ \mu_t \})$ are the optimal controls, using It\^o's formula and the HJB Eqn. (\ref{HJBHJBHJB}), we obtain \begin{equation} \label{eqeqeq} \begin{aligned} &-v_{\theta}(\tau,x|\{\mu_t\} ) \\ &\hspace{10pt}= v_{\theta}(T,x_T|\{\mu_t \}) - v_{\theta}(\tau,x|\{\mu_t\} )\\ &\hspace{10pt}= - \int_\tau^T \biggl( f(x_s,\mu_s)+ \gamma_1 \varphi_{1, \theta} (s,x_s| \{ \mu_t \}) \\ &\hspace{30pt}+ \gamma_2 \varphi_{2, \theta} (s,x_s| \{ \mu_t \}) \biggl) ds + \int_\tau^T \sigma \partial_x v_{\theta} (s,x_s|\{\mu_t\} ) dW_s. \end{aligned} \end{equation} Similarly, for any $n \in \mathbb{N}$, applying It\^o's formula to $v_{\theta}^n(s,x)$ and $\{x_t\}$ yields \begin{align*} &v_{\theta}^n(T,x_T|\{\mu_t^n\}) - v_{\theta}^n(\tau,x|\{\mu_t^n\})\\ = &\int_\tau^T \partial_t v^n_{\theta} (s,x_s|\{\mu_t^n\}) \\ &+ \biggl[b(x_s,\mu_s) + \varphi_\theta(s,x_s| \{ \mu_t \}) \biggl] \partial_x v_{\theta}^n(s,x_s|\{\mu_t^n\}) \\ &+ \frac{\sigma^2}{2} \partial_{xx} v^n_{\theta}(s,x_s|\{\mu_t^n\}) ds\\ &+\int_\tau^T\sigma \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) dW_s\\ =& \int_\tau^T \partial_t v^n_{\theta} (s,x_s|\{\mu_t^n\}) + \\ & \biggl[ b(x_s,\mu_s^n) + \varphi^n_\theta(s,x_s | \{ \mu^n_t \} ) \biggl] \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\})\\ & + \frac{\sigma^2}{2} \partial_{xx} v^n_{\theta}(s,x_s|\{\mu_t^n\}) ds\\ & + \int_\tau^T\sigma \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) dW_s \\& - \int_\tau^T\biggl[b(x_s,\mu_s^n) -b(x_s,\mu_s) +\varphi^n_\theta(s,x_s | \{ \mu^n_t \}) \\ &-\varphi_\theta (s,x_s| \{ \mu_t \})\biggl]\partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) ds.
\end{align*} Now, combined with the HJB Eqn. (\ref{HJBHJBHJB}), it is clear that \begin{equation}\label{eqnn} \begin{aligned} &v_{\theta}^n(\tau,x|\{\mu_t^n\} ) = \int_\tau^T \biggl( f(x_s ,\mu_s^n ) \\ &\hspace{10pt}+ \gamma_1 \varphi_{1, \theta}^n (s,x_s | \{ \mu^n_t \} )+ \gamma_2 \varphi_{2, \theta}^n (s,x_s | \{ \mu^n_t \} ) \biggl) ds \\ &\hspace{10pt}- \int_\tau^T \sigma \partial_x v^n_{\theta} (s,x_s|\{\mu_t^n\} ) dW_s \\ &\hspace{10pt} + \int_\tau^T \biggl[b(x_s,\mu_s^n) -b(x_s,\mu_s)+ \varphi^n_\theta(s,x_s | \{ \mu^n_t \}) \\ &\hspace{10pt} -\varphi_\theta (s,x_s| \{ \mu_t \})\biggl] \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) ds. \end{aligned} \end{equation} Denote \begin{align*} &\hspace{-40pt}H(s,x ) = \inf_{\dot{\xi}^\pm\in [0,\theta]} \{ ( \dot{\xi}^+-\dot{\xi}^-)\partial_x v_{\theta}(s,x|\{\mu_t\}) + \gamma_1 \dot{\xi}^++ \gamma_2 \dot{\xi}^-\},\\ &\hspace{-40pt}H^n(s,x ) = \inf_{\dot{\xi}^\pm\in [0,\theta]}\{ ( \dot{\xi}^+-\dot{\xi}^-)\partial_x v^n_{\theta} (s,x|\{\mu_t^n\}) + \gamma_1 \dot{\xi}^++ \gamma_2 \dot{\xi}^-\}. \end{align*} Then, for any $\dot{\xi}^+,\dot{\xi}^- \in [0,\theta]$, \begin{align*} &\biggl| \biggl[( \dot{\xi}^+-\dot{\xi}^-)\partial_x v_{\theta}(s,x|\{\mu_t \} ) + \gamma_1 \dot{\xi}^++ \gamma_2 \dot{\xi}^- \biggl]\\ &- \biggl[ (\dot{\xi}^+-\dot{\xi}^-)\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) + \gamma_1 \dot{\xi}^++ \gamma_2 \dot{\xi}^-\biggl] \biggl| \\& \leq 2\theta \biggl| \partial_x v_{\theta}(s,x|\{\mu_t \})-\partial_x v^n_{\theta} (s,x|\{\mu_t^n\}) \biggl|.
\end{align*} Hence, for any $s,x \in [\tau,T]\times \mathbb{R}$, $$ \begin{aligned}&\hspace{-30pt}|H(s,x)-H^n(s,x)| \leq 2\theta \biggl| \partial_x v_{\theta}(s,x|\{\mu_t \})-\partial_x v^n_{\theta}(s,x|\{\mu_t^n\})\biggl|.\end{aligned}$$ By definition, \begin{align*} 2\theta &\biggl| \partial_x v_{\theta}(s,x|\{\mu_t \})-\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl| \\ \geq & \biggl| \biggl( \varphi_{1, \theta}(t,x| \{ \mu_t \}) -\varphi_{2, \theta}(t,x| \{ \mu_t \}) \biggl)\partial_x v_{\theta}(s,x|\{\mu_t \})\\ & + \gamma_1 \varphi_{1, \theta}(t,x| \{ \mu_t \})+ \gamma_2 \varphi_{2, \theta}(t,x| \{ \mu_t \}) \\& - \biggl( \varphi_{1, \theta}^n(t,x | \{ \mu^n_t \}) -\varphi_{2, \theta}^n(t,x | \{ \mu^n_t \}) \biggl)\partial_x v^n_{\theta}(s,x|\{\mu_t^n\} ) \\ &+ \gamma_1 \varphi_{1, \theta}^n(t,x | \{ \mu^n_t \})+\gamma_2 \varphi_{2, \theta}^n(t,x | \{ \mu^n_t \}) \biggr| \\ \geq & \biggl| \biggl( \gamma_1+\partial_x v_{\theta}(s,x|\{\mu_t \}) \biggl) \biggl(\varphi_{1, \theta}(t,x| \{ \mu_t \})- \varphi_{1, \theta}^n(t,x | \{ \mu^n_t \})\biggl) \\ + &\biggl( \gamma_2 -\partial_x v_{\theta}(s,x|\{\mu_t \}) \biggl)\biggl(\varphi_{2, \theta}(t,x| \{ \mu_t \})- \varphi_{2, \theta}^n(t,x | \{ \mu^n_t \})\biggl) \biggr| \\ & - \theta \biggl| \partial_x v_{\theta} (s,x|\{\mu_t \})-\partial_x v^n_{\theta} (s,x |\{\mu_t^n\}) \biggr|. \end{align*} Hence, \begin{equation} \label{eqeqeqeq} \begin{aligned} 3\theta & \biggl| \partial_x v_{\theta}(s,x|\{\mu_t \})-\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl| \\ & \geq \biggl| \biggl( \gamma_1 +\partial_x v_{\theta}(s,x|\{\mu_t \}) \biggl)\biggl(\varphi_{1, \theta}(s,x| \{ \mu_t \})- \varphi_{1, \theta}^n(s,x | \{ \mu^n_t \})\biggl) \\& + \biggl( \gamma_2 -\partial_x v_{\theta}(s,x|\{\mu_t \}) \biggl) \biggl(\varphi_{2, \theta}(s,x| \{ \mu_t \})- \varphi_{2, \theta}^n(s,x | \{ \mu^n_t \})\biggl) \biggr|. 
\end{aligned} \end{equation} Similarly, \begin{equation} \label{eqeqeqeqeq} \begin{aligned} 3\theta &\biggl| \partial_x v_{\theta}(s,x|\{\mu_t\})-\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl| \\ &\geq \biggl| \biggl( \gamma_1 +\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl)\biggl(\varphi_{1, \theta}(s,x| \{ \mu_t \})- \varphi_{1, \theta}^n(s,x | \{ \mu^n_t \})\biggl) \\& + \biggl( \gamma_2 -\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl)\biggl(\varphi_{2, \theta}(s,x| \{ \mu_t \})- \varphi_{2, \theta}^n(s,x | \{ \mu^n_t \})\biggl) \biggr|. \end{aligned} \end{equation} Step 3. We further show that $\lim_{n\to\infty}\varphi^n_\theta( s,x | \{ \mu^n_t \}) = \varphi_\theta(s,x| \{ \mu_t \})$ for any $ (s,x) \in [0,T]\times \mathbb{R}$. Indeed, from Eqns. (\ref{eqeqeq}) and (\ref{eqnn}) and by It\^o's isometry and the Cauchy--Schwarz inequality, \begin{align*} &\biggl( v_{\theta}(\tau,x |\{\mu_t\}) - v^n_{\theta}(\tau,x |\{\mu_t^n\})\biggl)^2 \\ + & \sigma^2 \mathbb{E}\biggl[\int_\tau^T\biggl(\partial_x v_{\theta}(s,x_s|\{\mu_t \}) - \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) \biggl)^2 ds \biggl] \\ \leq & 3(T-\tau) \mathbb{E} \biggl[ \int_\tau^T \biggl(f(x_s ,\mu_s) - f(x_s ,\mu_s^n )\biggl)^2 \\ & + \biggl( (b(x_s,\mu_s) -b(x_s,\mu_s^n) ) \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n \}) \biggl)^2 \\ & \hspace{-20pt} + \biggl( (\gamma_1+ \partial_x v^n_{\theta}(s,x |\{\mu_t^n\})) (\varphi_{1, \theta} (s,x_s| \{ \mu_t \} ) -\varphi_{1, \theta}^n (s,x_s | \{ \mu^n_t \} )) \\& \hspace{-30pt} + ( \gamma_2 - \partial_x v^n_{\theta}(s,x|\{\mu_t^n\} ))(\varphi_{2, \theta}(s,x_s| \{ \mu_t \} ) - \varphi_{2, \theta}^n (s,x_s | \{ \mu^n_t \} )) \biggl)^2 ds \biggl] \\ \hspace{30pt} \leq & 3(T-\tau) \mathbb{E} \biggl [ \int_\tau^T \biggl( Lip(f) D^1(\mu_s ,\mu_s^n) \biggl)^2 \\ &\hspace{-30pt} + \biggl( Lip(b) D^1(\mu_s ,\mu_s^n)| \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\})| \biggl )^2 \\& \hspace{-30pt}+ \biggl (3\theta (\partial_x v_{\theta}(s,x_s|\{\mu_t \}) - \partial_x
v^n_{\theta}(s,x_s|\{\mu_t^n\})) \biggl)^2 ds \biggl]. \end{align*} Let $\delta = \frac{\sigma^2}{54\theta^2}$. Then, for any $\tau \in [T-\delta, T]$, \begin{align*} & \biggl( v_{\theta}(\tau,x|\{\mu_t \} ) - v^n_{\theta}(\tau,x |\{\mu_t^n\}) \biggl)^2 \\ + \frac{\sigma^2}{2} &\mathbb{E} \biggl[\int_\tau^T(\partial_x v_{\theta}(s,x_s|\{\mu_t \}) - \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) )^2 ds \biggl] \\ \leq & 3(T-\tau) \mathbb{E} \biggl[ \int_\tau^T \biggl( Lip(f) D^1(\mu_s ,\mu_s^n) \biggl)^2 \\ &\hspace{10pt}+ \biggl( Lip(b) D^1(\mu_s ,\mu_s^n)| \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\})| \biggl)^2 ds \biggl]. \end{align*} Hence, for any $\tau \in [T-\delta, T]$, $ v_{\theta}(\tau,x|\{\mu_t \} ) - v^n_{\theta}(\tau,x|\{\mu_t^n\} ) \rightarrow 0$ and $ \mathbb{E} \biggl[\int_\tau^T \biggl(\partial_x v_{\theta}(s,x_s|\{\mu_t \}) - \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) \biggl)^2 ds \biggl] \rightarrow 0$ as $n \rightarrow \infty.$ Since $\delta >0$, one can repeat this argument on $[T-2\delta, T-\delta]$. Proceeding recursively, one can show that for any $(t,x) \in [0,T]\times \mathbb{R}$, $ v^n_{\theta}(t,x|\{\mu_t^n\} ) \rightarrow v_{\theta}(t,x |\{\mu_t \}) $ and $ \mathbb{E} \biggl[\int_0^T \biggl(\partial_x v_{\theta}(s,x_s|\{\mu_t\}) - \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) \biggl )^2 ds \biggl] \rightarrow 0 \text{ as } n \rightarrow \infty.$ Hence, for any $(s,x) \in [0,T]\times \mathbb{R}$, $\lim_{n\to\infty}\partial_x v^n_{\theta} (s,x|\{\mu_t^n\}) =\partial_x v_{\theta}(s,x|\{\mu_t\})$. By Proposition \ref{strictconvex}, $\partial_x v^n_{\theta } (s,x|\{\mu_t^n\})$ and $\partial_x v_{\theta}(s,x|\{\mu_t \}) $ are strictly increasing in $x$, and by the definition of $\varphi^n_\theta$ and $\varphi_\theta$, $\varphi^n_\theta( s,x | \{ \mu^n_t \}) $ converges to $\varphi_\theta(s,x| \{ \mu_t \})$ for any $(s,x) \in [0,T]\times \mathbb{R} $. Step 4.
We now show $\displaystyle{\lim_{n\to\infty}d_\mathcal{M} \biggl(\{\tilde{\mu}_t\},\{\tilde{\mu}_t^n\} \biggl) =0}$. From the previous steps, $\varphi^n_\theta( s,x_s | \{ \mu^n_t \})\rightarrow\varphi_\theta(s,x_s| \{ \mu_t \})$ a.s. as $n\rightarrow \infty$, and since $|\varphi^n_\theta|, |\varphi_\theta| \leq \theta$, by the Dominated Convergence Theorem, for each $s\in [0,T]$, $\mathbb{E} \biggl|\varphi^n_\theta( s,x_s | \{ \mu^n_t \})-\varphi_\theta(s,x_s | \{ \mu_t \}) \biggl|^2 \rightarrow 0.$ Hence, by inequality (\ref{ineqcontinuity1}), $D^2(\tilde{\mu}_t,\tilde{\mu}_t^n) \rightarrow 0$ for any $t \in [0,T]$, and $d_\mathcal{M} \biggl(\{\tilde{\mu}_t\},\{\tilde{\mu}_t^n\} \biggl) \rightarrow 0 \text{ as } n \rightarrow \infty.$ That is, $\Gamma $ is continuous. \end{proof} \begin{proposition}\label{uniq} Assume \emph{(A1)--(A6)}. Then $\Gamma:\mathcal{M}_{[0,T]}\rightarrow \mathcal{M}_{[0,T]}$ has a fixed point, and the game \eqref{MFGbounded1} has a unique solution. \end{proposition} \begin{proof} As in the proof in Section 3.2 and the proof of Lemma 5.7 in \cite{Cardaliaguet2013}, the range of the mapping $\Gamma $ is relatively compact, and by Proposition \ref{continuous}, $\Gamma$ is a continuous mapping. Hence, by the Schauder fixed point theorem~\cite[Theorem 4.1.1]{Smart1980}, $\Gamma$ has a fixed point such that $\Gamma (\{\mu_t\}) = \{\mu_t\} \in \mathcal{M}_{[0,T]}$. By Assumption (A5), there exists at most one fixed point \cite{Cardaliaguet2013, LL2007}. Therefore, there exists a unique fixed-point flow of probability measures $\{\mu_t^*\}$. By the definition of the solution to a MFG and Proposition \ref{optimization}, the optimal control is also unique.
\end{proof} \subsection{Proof of Main Theorem} Suppose that $ \biggl((\xi_{\cdot,\theta}^+,\xi_{\cdot,\theta}^- ),\{\mu_{t,\theta} \} \biggl)$ is a solution to (\ref{MFGbounded1}) with a given bound $\theta$, and $x_{t,\theta}$ is the optimally controlled process: \begin{align*} & dx_{t,\theta} = \biggl(b(x_{t,\theta}, \mu_{t,\theta}) +\varphi_{1,\theta}(t,x_{t,\theta}|\{\mu_{t,\theta}\}) \\ & \ \ \ \ - \varphi_{2,\theta}(t,x_{t,\theta}|\{\mu_{t,\theta}\}) \biggl) dt + \sigma dW_t, \,x_{s,\theta} = x, \end{align*} where $\dot{\xi}_{t,\theta}^+-\dot{\xi}_{t,\theta}^- = \varphi_\theta (t,x|\{\mu_{t, \theta}\}) = \varphi_{1,\theta} (t,x |\{\mu_{t,\theta}\}) - \varphi_{2,\theta}(t,x |\{\mu_{t,\theta}\}) $ is the optimal control function. Note that we explicitly write $\mu_{t, \theta}$ here to emphasize the dependence on $\theta$ for the game (MFG-BD). Given this $\{\mu_{t,\theta}\}$, let $v(s,x|\{ \mu_{t,\theta}\}) $ be the value function of the stochastic control problem (\ref{Control-FV}), and let $x_{t}$ be the optimally controlled process \begin{align*} dx_{t} = b(x_{t}, \mu_{t,\theta})dt + \sigma dW_t +d\xi_{t}^+ -d\xi_{t}^- , \quad x_{s-} = x, \end{align*} where the optimal control $\xi_{t}$ is of a feedback form. Hence, denote $$\begin{aligned} d \varphi (t,x|\{\mu_{t,\theta}\})&=d \varphi_{1} (t,x|\{\mu_{t,\theta}\})-d \varphi_{2} (t,x|\{\mu_{t,\theta}\})= d\xi_{t}^+ -d\xi_{t}^-\end{aligned}$$ as the optimal control function for the stochastic control problem (\ref{Control-FV}) with the fixed $\{\mu_{t,\theta}\}$.
Now, let $x_{s,\theta}^i = x$, $x_{s-}^i = x,$ and $x_{s,\theta}^{i, N} = x$, and define \begin{align*} dx_{t,\theta}^i = & \biggl(b(x_{t,\theta}^i, \mu_{t,\theta}) +\varphi_{1,\theta}(t,x_{t,\theta}^i|\{\mu_{t,\theta}\} )\\ & - \varphi_{2,\theta}(t,x_{t,\theta}^i|\{\mu_{t,\theta}\} ) \biggl) dt + \sigma dW_{t }^i, \\ dx_{t}^i = & b(x_{t}^i, \mu_{t,\theta})dt +d \varphi_{1} (t,x_{t}^i|\{\mu_{t,\theta}\} )\\ & -d \varphi_{2} (t,x_{t}^i|\{\mu_{t,\theta}\} ) + \sigma dW_t^i, \\ \hspace{-50pt} dx_{t,\theta}^{i, N} =& \biggl( \frac{1}{N} \sum_{ j = 1}^N b_0(x_{t,\theta}^{i, N}, x_{t,\theta}^{j, N} ) +\varphi_{1,\theta}(t,x_{t,\theta}^{i, N}|\{\mu_{t,\theta}\} ) \\ &- \varphi_{2,\theta}(t,x_{t,\theta}^{i, N}|\{\mu_{t,\theta}\} ) \biggl) dt + \sigma dW_t^i. \end{align*} Recall that $(\mu_{t,\theta}, \varphi_\theta)$ is the solution to (\ref{MFGbounded1}), that the $x_{t,\theta}^i$ are i.i.d., and that $\mu_{t,\theta}$ is the probability measure of $x_{t,\theta}^i$ for any $i = 1,\ldots, N$. We first establish some technical lemmas. \begin{lemma} $\sup_{1\leq i\leq N} \mathbb{E} \sup_{s\leq t\leq T} \limits \biggl|x_{t,\theta}^i-x_{t,\theta}^{i,N} \biggl|^2 = O \biggl(\frac{1}{N} \biggl)$.
\label{Nash1} \end{lemma} \begin{proof} \begin{align*} & \hspace{-30pt}d(x_{t,\theta}^i-x_{t,\theta}^{i,N} )= \biggl( \int_\mathbb{R} b_0(x_{t,\theta}^i,y) \mu_{t,\theta} (dy)\\ &\hspace{-30pt}-\frac{\sum_{j=1}^N b_0( x_{t,\theta}^{i,N},x_{t,\theta}^{j,N}) }{N} + \varphi_\theta (t, x_{t,\theta}^i|\{\mu_{t,\theta}\} )\\ &\hspace{-30pt}- \varphi_\theta(t, x_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} )\biggl) dt, \end{align*} and \begin{align*} & \hspace{-50pt}d(x_{t,\theta}^i-x_{t,\theta}^{i,N} )^2 = \biggl\lbrace 2 (x_{t,\theta}^i-x_{t,\theta}^{i,N} ) \biggl(\int_\mathbb{R} b_0(x_{t,\theta}^i,y) \mu_{t,\theta} (dy)\\ &\hspace{-30pt}-\frac{\sum_{j=1}^N b_0(x_{t,\theta}^{i,N} ,x_{t,\theta}^{j,N} )}{N} + \varphi_\theta (t, x_{t,\theta}^i|\{\mu_{t,\theta}\} ) \\ & \hspace{-30pt} - \varphi_\theta(t, x_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} ) \biggl) \biggr\rbrace dt. \end{align*} By Assumption (A6), for any $t\in[s,T]$, \begin{align*} & |x_{t,\theta}^i-x_{t,\theta}^{i,N}|^2 \leq \int_s^t 2 | x_{u,\theta}^i-x_{u,\theta}^{i,N}| \\ & \biggl| \int_\mathbb{R} b_0(x_{u,\theta}^i,y) \mu_{u,\theta} (dy)-\frac{\sum_{j=1}^N b_0( x_{u,\theta}^{i,N},x_{u,\theta}^{j,N}) }{N}\biggl|du \\ & \leq\int_s^t2|x_{u,\theta}^i-x_{u,\theta}^{i,N}|\\ &\times\biggl| \int_\mathbb{R} b_0(x_{u,\theta}^i,y) \mu_{u,\theta} (dy)-\frac{\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^j)}{N} \biggl|du \\& \hspace{-25pt}+\int_s^t\frac{2|x_{u,\theta}^i-x_{u,\theta}^{i,N}|}{N} \biggl|\sum_{j=1}^N[ b_0( x_{u,\theta}^{i},x_{u,\theta}^j)-b_0( x_{u,\theta}^{i},x_{u,\theta}^{j,N})]\biggl|du \\& \hspace{-25pt} +\int_s^t\frac{2|x_{u,\theta}^i-x_{u,\theta}^{i,N}|}{N} \biggl| \sum_{j=1}^N [b_0( x_{u,\theta}^{i},x_{u,\theta}^{j,N}) -b_0( x_{u,\theta}^{i,N},x_{u,\theta}^{j,N}) ]\biggl|du \\& \hspace{-20pt} \leq \int_s^t\biggl| \int_\mathbb{R} b_0(x_{u,\theta}^i,y) \mu_{u,\theta} (dy)-\frac{1}{N}\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^j) \biggl|^2du + \\&
\hspace{-15pt}\int_s^t[1+3Lip(b_0)]|x_{u,\theta}^i-x_{u,\theta}^{i,N}|^2du+\int_s^t\frac{Lip(b_0)}{N}\sum_{j=1}^N|x_{u,\theta}^j-x_{u,\theta}^{j,N}|^2du. \end{align*} Recall that the initial distribution among the $N$ players is permutation invariant, $b_0$ is bounded, and the $x_{\cdot,\theta}^{i}$'s are i.i.d.; hence \begin{align*} & \mathbb{E} |x_{t,\theta}^{i }-x_{t,\theta}^{i,N}|^2 \leq [1+ 4Lip(b_0)] \mathbb{E} \int_s^t | x_{u,\theta}^{i }-x_{u,\theta}^{i,N}|^2 du\\ & \hspace{-10pt}+ \mathbb{E} \int_s^t \biggl| \int_\mathbb{R} b_0(x_{u,\theta}^{i },y) \mu_{u,\theta} (dy)-\frac{\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^{j})}{N} \biggl|^2du. \end{align*} Moreover, $$\begin{aligned}\hspace{-20pt}\mathbb{E} \biggl| \int_\mathbb{R} b_0(x_{t,\theta}^{i },y) \mu_{t,\theta} (dy)-\frac{1}{N}\sum_{j=1}^N b_0( x_{t,\theta}^{i},x_{t,\theta}^{j}) \biggl|^2=\epsilon_N^2,\end{aligned}$$ with $\epsilon_N^2=O\biggl(\frac{1}{N}\biggl).$ Consequently, \begin{align*} \mathbb{E} | x_{t,\theta}^{i }-x_{t,\theta}^{i,N} |^2 \leq \mathbb{E} \int_s^t \biggl\{&\biggl[1+4Lip(b_0)\biggl] \biggl | x_{u,\theta}^{i}-x_{u,\theta}^{i,N} \biggl|^2 + \epsilon_N^2\biggl\} du. \end{align*} By Gronwall's inequality, \begin{align*} & \mathbb{E} | x_{t,\theta}^{i }-x_{t,\theta}^{i,N} |^2 \leq \epsilon_N^2\cdot T\cdot\exp\biggl\{T[1+4Lip(b_0)]\biggl\}. \end{align*} Therefore, $ \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i}-x_{t,\theta}^{i,N}|^2 = O\biggl( \frac{1}{N} \biggl)$. \end{proof} Suppose that the first player chooses a different control $\xi_t' $ of bounded velocity, while all other players $i=2,3,\ldots, N$ stay with the optimal control $\{\xi_{t,\theta}\}$. Denote $$ d\xi_t' = \dot{\xi}_t' dt = \varphi'(t,x) dt,\,d\xi_{t,\theta} = \dot{\xi}_{t,\theta} dt = \varphi_\theta (t,x|\{\mu_{t,\theta}\} ) dt.
$$ Then the corresponding dynamics for the MFG is \begin{align*} d \tilde{x}_{t,\theta}^1 &= \biggl( b (\tilde{x}_{t,\theta}^1,\mu_{t,\theta}) + \varphi'(t,\tilde{x}_{t,\theta}^1) \biggl) dt + \sigma dW_t^1. \end{align*} The corresponding dynamics for the $N$-player game are \begin{align*} d\tilde{x}_{t,\theta}^{1,N} &= \biggl( \frac{1}{N}\sum_{j=1}^N b_0( \tilde{x}_{t,\theta}^{1,N} ,\tilde{x}_{t,\theta}^{j,N}) + \varphi'(t,\tilde{x}_{t,\theta}^{1,N})\biggl) dt + \sigma dW_t^1, \\d\tilde{x}_{t,\theta}^{i,N} &= \biggl( \frac{1}{N}\sum_{j=1}^N b_0 (\tilde{x}_{t,\theta}^{i,N},\tilde{x}_{t,\theta}^{j,N}) + \varphi_\theta (t,\tilde{x}_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} )\biggl) dt + \sigma dW_t^i, \quad \quad \quad 2 \leq i \leq N. \end{align*} We first show: \begin{lemma}\label{Lemma-Nash} $ \sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{0\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}| \leq O \biggl(\frac{1}{\sqrt{N}} \biggl) $. \end{lemma} \begin{proof} For any $2 \leq i \leq N$, $d(x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N})= $ \begin{align*} & \biggl[ \frac{1}{N}\sum_{j=1}^N \biggl( b_0(x_{t,\theta}^{i,N},x_{t,\theta}^{j,N}) -b_0(\tilde{x}_{t,\theta}^{i,N},\tilde{x}_{t,\theta}^{j,N}) \biggl) \biggl.\\ &+ \biggl. \varphi_\theta (t,x_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} ) -\varphi_\theta(t,\tilde{x}_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} )\biggl] dt.
\end{align*} Because $\varphi_\theta (t,x|\{\mu_{t,\theta}\} )$ is nonincreasing in $x$, \begin{align*} &\hspace{-30pt} |x_{T,\theta}^{i,N}-\tilde{x}_{T,\theta}^{i,N}|^2 \leq \int_s^T 2 (x_{t,\theta}^{i,N}- \tilde{x}_{t,\theta}^{i,N})\\ &\times \frac{\sum_{j=1}^N \biggl(b_0(x_{t,\theta}^{i,N},x_{t,\theta}^{j,N})-b_0(\tilde{x}_{t,\theta}^{i,N},\tilde{x}_{t,\theta}^{j,N}) \biggl)}{N} dt\\ &\hspace{-45pt}\leq \int_s^T \frac{2 (x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N})}{N}\sum_{j=1}^N Lip(b_0) [ |x_{t,\theta}^{i,N} - \tilde{x}_{t,\theta}^{i,N}|+ |x_{t,\theta}^{j,N}-\tilde{x}_{t,\theta}^{j,N}| ] dt \\ &\hspace{-45pt}\leq 2 Lip(b_0) \int_s^T |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2 + \frac{|x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|}{N}\sum_{j=1}^N |x_{t,\theta}^{j,N}-\tilde{x}_{t,\theta}^{j,N}| dt \\ &\hspace{-45pt}\leq 2 Lip(b_0) \int_s^T |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2 \\ &+ \frac{1}{2N}\sum_{j=1}^N \biggl( |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2+ |x_{t,\theta}^{j,N}-\tilde{x}_{t,\theta}^{j,N}|^2 \biggl)dt \\ &\hspace{-45pt}\leq Lip(b_0) \int_s^T 3 |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2 + \frac{\sum_{j=1}^N |x_{t,\theta}^{j,N}-\tilde{x}_{t,\theta}^{j,N}|^2}{N} dt , \end{align*} and \begin{align*} & \hspace{-30pt}\sup_{2\leq i\leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2 \leq Lip(b_0) \int_s^T \biggl[ \frac{4N-1}{N} \\ &\hspace{-30pt} \times\sup_{2\leq i\leq N} \limits \mathbb{E} \sup_{s\leq t' \leq t} \limits |x_{t',\theta}^{i,N}-\tilde{x}_{t',\theta}^{i,N}|^2 + \frac{1}{N} \mathbb{E}|x_{t,\theta}^{1,N}-\tilde{x}_{t,\theta}^{1,N}|^2 \biggl] dt. 
\end{align*} By Gronwall's inequality, $\sup_{2\leq i\leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2 =O \biggl( \frac{1}{N} \biggl)$, and so $\sup_{2\leq i\leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|=O \biggl(\frac{1}{\sqrt{N}} \biggl). $ \end{proof} \paragraph{Proof of Main Theorem a).} By Lemma \ref{Nash1}, for any $2 \le i \le N$, $ \sup_{s\leq t\leq T} \limits \mathbb{E} |x_{t,\theta}^{i }-x_{t,\theta}^{i,N} | = O \biggl(\frac{1}{\sqrt{N}} \biggl)$, and by the triangle inequality, $\sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i}-\tilde{x}_{t,\theta}^{i,N}| = O(\frac{1}{\sqrt{N}})$. Therefore, $ \sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{ i}-\tilde{x}_{t,\theta}^{i, N}| + \sup_{1 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{ i}-x_{t,\theta}^{ i,N}| = O \biggl(\frac{1}{\sqrt{N}} \biggl).$ Finally, define \begin{align*} d\bar{x}_{t,\theta}^{1,N} &= \biggl( \frac{1}{N}\sum_{j=1}^N b_0( \bar{x}_{t,\theta}^{1,N} ,x_{t,\theta}^{j}) + \varphi'(t,\bar{x}_{t,\theta}^{1,N})\biggl) dt + \sigma dW_t^1. \end{align*} Since $(x-y)(\varphi'(t,x)-\varphi'(t,y)) \leq 0$ by Assumption (A6), a similar proof to that of Lemma~\ref{Nash1} shows $\mathbb{E} \sup_{0\leq t\leq T} \limits |\tilde{x}_{t,\theta}^{1,N}-\bar{x}_{t,\theta}^{1,N}| = O \biggl(\frac{1}{\sqrt{N}} \biggl)$ and $ \mathbb{E} \sup_{0\leq t\leq T} \limits |\bar{x}_{t,\theta}^{1,N}-\tilde{x}_{t,\theta}^{1 }| = O\biggl( \frac{1}{\sqrt{N}} \biggl)$. Therefore, \begin{align*} &E_{x_{s-,\theta}^{N}}\biggl[J^{1,N}_{\theta}(s,x_{s-,\theta}^{N},\xi_\cdot^{'+},\xi_\cdot^{'-};\xi_{\cdot,\theta}^{-1}|\{\mu_{t,\theta}\} ) \biggl] \\&\geq \mathbb{E} \biggl[ \int_s^T \frac{1}{N} \sum_{j=1}^N f_0(\tilde{x}_{t,\theta}^{1,N},x_{t,\theta}^{j }) +\gamma_1 \varphi'_1(t,\tilde{x}_{t,\theta}^{1,N})\biggl.
\\ &\hspace{10pt} +\gamma_2 \varphi'_2(t,\tilde{x}_{t,\theta}^{1,N}) dt \biggl] -O\biggl(\frac{1}{\sqrt{N}} \biggl) \\&\geq \mathbb{E} \biggl[ \int_s^T \int_\mathbb{R} f_0 (\tilde{x}_{t,\theta}^{1},y) \mu_{t,\theta} (dy) + \gamma_1 \varphi'_1(t,\tilde{x}_{t,\theta}^{1}) \biggl.\\ &\hspace{10pt}+ \gamma_2 \varphi'_2(t,\tilde{x}_{t,\theta}^{1}) dt \biggl] -O\biggl(\frac{1}{\sqrt{N}} \biggl) \\&\geq \mathbb{E} \biggl[ \int_s^T \int_\mathbb{R} f_0 (x_{t,\theta}^{1} ,y) \mu_{t,\theta} (dy) + \gamma_1 \varphi_{1,\theta} (t,{x}_{t,\theta}^{1}|\{\mu_{t,\theta}\} )\biggl.\\ &\hspace{10pt} +\gamma_2 \varphi_{2,\theta} (t,{x}_{t,\theta}^{1}|\{\mu_{t,\theta}\} ) dt \biggl] -O\biggl(\frac{1}{\sqrt{N}} \biggl) \\& = E_{x_{s-,\theta}^{N}}\biggl[J^{1,N}_{\theta}(s,x_{s-,\theta}^N,\xi_{\cdot,\theta}^{ +},\xi_{\cdot,\theta}^{ -};\xi_{\cdot,\theta}^{ -1}|\{\mu_{t,\theta}\} )\biggl] \\ & \hspace{10pt} -O\biggl(\frac{1}{\sqrt{N}} \biggl), \end{align*} where the last inequality follows from the optimality of $\varphi_\theta$ for problem (\ref{MFGbounded1}), and the last equality follows from a similar proof to that of Lemma \ref{Nash1}. $\square$ \paragraph{Proof of Main Theorem b).} Let all players except player 1 choose the optimal controls $(\xi_{\cdot,\theta}^+,\xi_{\cdot,\theta}^-) $, and let player 1 choose any other control $(\xi_{\cdot}^{'+},\xi_{\cdot}^{'-}) \in \mathcal{U}$.
Denote \begin{align*} &d \xi_t'= d \varphi ' (t,x )= d \varphi_1' (t,x )-d \varphi_2' (t,x ),\\ & d\tilde{x}_{t}^1 = b (\tilde{x}_{t}^1, \mu_{t,\theta} ) dt +d\varphi_1'(t,\tilde{x}_{t}^1) - d\varphi_2'(t,\tilde{x}_{t}^1 ) + \sigma dW_t^1 ,\,\tilde{x}_{s-}^1 = x,\\ & d\tilde{x}_{t}^{1,N} = \frac{1}{N} \sum_{j=1}^N b_0(\tilde{x}_{t}^{1,N}, \tilde{x}_{t}^{j,N} ) dt +d\varphi_1'(t,\tilde{x}_{t}^{1,N} ) \\ & -d \varphi_2'(t,\tilde{x}_{t}^{1,N}) + \sigma dW_t^1, \ \ \tilde{x}_{s-}^{1,N} = x, \end{align*} and for $ i = 2,\ldots,N$, $x_{s-}^{i,N} = x$, $d\tilde{x}_{t}^{i,N} =$ \begin{align*} & \biggl( \frac{\sum_{j=1}^N b_0(\tilde{x}_{t}^{i,N}, \tilde{x}_{t}^{j,N} ) }{N} +\varphi_{1,\theta}(t,\tilde{x}_{t}^{i,N}|\{\mu_{t,\theta}\} ) \\ & - \varphi_{2,\theta}(t,\tilde{x}_{t}^{i,N} |\{\mu_{t,\theta}\} ) \biggl) dt + \sigma dW_t^i. \end{align*} Then, \begin{align*} & \hspace{-30pt} d(x_{t,\theta}^{i,N}-\tilde{x}_{t}^{i,N}) = \biggl[ \frac{1}{N}\sum_{j=1}^N \biggl( b_0(x_{t,\theta}^{i,N},x_{t,\theta}^{j,N})-b_0(\tilde{x}_{t}^{i,N},\tilde{x}_{t}^{j,N}) \biggl) \biggl.\\ &\hspace{10pt} + \varphi_\theta(t,x_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} ) -\varphi_\theta (t,\tilde{x}_{t}^{i,N}|\{\mu_{t,\theta}\} )\biggl] dt. \end{align*} By definition, $\varphi_\theta (t,x|\{\mu_{t,\theta}\} )$ is nonincreasing in $x$. Hence, a similar proof to the one for Lemma \ref{Lemma-Nash} yields \begin{equation} \label{Lemma-Nash2} \sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t}^{i,N}| = O \biggl(\frac{1}{\sqrt{N}} \biggl). \end{equation} From Lemma \ref{Nash1} and the triangle inequality, $\sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i}-\tilde{x}_{t}^{i,N}| = O \biggl(\frac{1}{\sqrt{N}} \biggl)$.
Therefore, $$\begin{aligned}&\sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{ i}-\tilde{x}_{t}^{i, N}| \\ &\hspace{10pt}+ \sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{ i}-x_{t,\theta}^{ i,N}|= O \biggl(\frac{1}{\sqrt{N}} \biggl).\end{aligned}$$ Since $d\varphi'(t,x) $ is also nonincreasing in $x$, the same proof as that for Lemma~\ref{Nash1} again shows $$\mathbb{E} \sup_{s\leq t\leq T} \limits | \tilde{x}^{ 1 ,N}_{t}-\tilde{x}_{t}^1| = O \biggl(\frac{1}{\sqrt{N}} \biggl).$$ By the Lipschitz continuity of $f$ and $f_0$, \begin{align*} &E_{x_{s-}^N}\biggl[J^{1,N} (s,x_{s-}^N ,\xi_\cdot^{'+},\xi_\cdot^{'-};\xi_{\cdot,\theta}^{-1}|\{\mu_{t,\theta}\} ) \biggl] \\&\geq \mathbb{E} \biggl[ \int_s^T \frac{\sum_{j=1}^N f_0(\tilde{x}_{t}^{1, N} , x_{t,\theta}^{ j })}{N} dt+\gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1, N} ) \\ &\hspace{10pt} + \gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1, N} ) \biggl] -O\biggl(\frac{1}{\sqrt{N}} \biggl)\\ &\geq \mathbb{E} \biggl[ \int_s^T \int_\mathbb{R} f(\tilde{x}_{t}^{1, N} , y) \mu_{t,\theta}(dy) dt + \gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1, N}) \biggl.\\ &\hspace{10pt} +\gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1, N} ) \biggl] -O\biggl(\frac{1}{\sqrt{N}} \biggl) \\&\geq \mathbb{E} \biggl[ \int_s^T \int_\mathbb{R} f(\tilde{x}_{t}^{1 } , y) \mu_{t,\theta} (dy) dt + \gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1,N} ) \biggl.\\ & \hspace{10pt}+\gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) \biggl] -O\biggl(\frac{1}{\sqrt{N}} \biggl).
\end{align*} By definitions of $\tilde{x}_{t }^{1 }$ and $\tilde{x}_{t}^{1,N }$, \begin{align*} & \mathbb{E} \biggl| d\varphi'_1(t,\tilde{x}_{t}^{1,N } ) -d\varphi'_1(t,\tilde{x}_{t}^{1 } )-d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) +d\varphi'_2(t,\tilde{x}_{t}^{1 } ) \biggl| \\& \leq \mathbb{E} d | \tilde{x}_{t}^{1,N } -\tilde{x}_{t}^{1} | + dt\mathbb{E} \biggl| \frac{\sum_{ j = 1}^N b_0(\tilde{x}_{t}^{1,N } , \tilde{x}_{t}^{j,N } ) }{N} \\ &- b (\tilde{x}_t^{1 }, \mu_{t,\theta } ) \biggl| = O\biggl(\frac{1}{\sqrt{N}} \biggl), \end{align*} and by definition of $\varphi_1',\varphi_2'$, $$ \mathbb{E} \sup_{s\leq t\leq T} \biggl| d\varphi'_1(t,\tilde{x}_{t}^{1,N } ) -d\varphi'_1(t,\tilde{x}_{t}^{1 } ) \biggl| = O\biggl(\frac{1}{\sqrt{N}} \biggl),$$ $$ \mathbb{E} \sup_{s\leq t\leq T}\biggl| - d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) +d\varphi'_2(t,\tilde{x}_{t}^{1 } )\biggl|= O\biggl(\frac{1}{\sqrt{N}} \biggl),$$ and \begin{align*} & \mathbb{E} \biggl[ \int_s^T \int_\mathbb{R} f(\tilde{x}_{t}^{1 } , y) \mu_{t,\theta}(dy) dt + \gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1,N } ) \biggl.\\ & \biggl. \ \ \ \ + \gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) \biggl] -O\biggl(\frac{1}{\sqrt{N}} \biggl) \\ & \geq \mathbb{E} \biggl[ \int_s^T \int_\mathbb{R} f(x_{t}^{1 } , y) \mu_{t,\theta } (dy) dt + \gamma_1 d\varphi_{1} (t,x_{t}^{1 } |\{\mu_{t,\theta}\} ) \biggl.\\ & \biggl. \ \ \ \ + \gamma_2 d\varphi_{2}(t,x_{t}^{1 } |\{\mu_{t,\theta}\} ) \biggl] -O\biggl(\frac{1}{\sqrt{N}} \biggl) \\ & = v (s,x |\{\mu_{t,\theta } \}) -O\biggl(\frac{1}{\sqrt{N}} \biggl). \end{align*} The last inequality is due to the optimality of $\varphi$. Now, by Theorem \ref{thetainfty}, $ \biggl|v_{\theta} (s,x |\{\mu_{t,\theta}\}) - v (s,x |\{\mu_{t,\theta} \}) \biggl| \le \epsilon_\theta$. 
Hence, with $ \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta }^{i} -x_{t,\theta}^{i,N } | = \epsilon_N $ and by the same analysis as in the previous steps, \begin{align*} &E_{x_{s-}^N}\biggl[J^{1,N} (s,x_{s-}^N ,\xi_\cdot^{'+},\xi_\cdot^{'-};\xi_{\cdot,\theta}^{-1}|\{\mu_{t,\theta}\} ) \biggl] \\ \geq & E_{x_{s-}^N}[v(s,x_{s-}^N| \{\mu_{t,\theta } \})] -\epsilon_N \\ \geq& E_{x_{s-}^N}[v_{\theta} ( s,x_{s-}^N|\{\mu_{t,\theta} \})] - (\epsilon_N+ \epsilon_\theta ) \\= &\mathbb{E} \biggl[ \int_s^T \int_\mathbb{R} f(x_{t,\theta}^{1 } , y) \mu_{t,\theta} (dy) dt + \gamma_1 d\varphi_{1,\theta}(t,x_{t,\theta}^{1 }|\{\mu_{t,\theta}\} ) \\ & + \gamma_2 d\varphi_{2,\theta } (t,x_{t,\theta}^{1 }|\{\mu_{t,\theta}\} ) \biggl] -(\epsilon_N+ \epsilon_\theta ) \\ \geq& \mathbb{E} \biggl[ \int_s^T \frac{1}{N} \sum_{j = 1}^N f_0(x_{t,\theta}^{1,N } , x_{t,\theta}^{j,N} ) dt +\gamma_1 d\varphi_{1,\theta} (t,x_{t,\theta}^{1,N } |\{\mu_{t,\theta}\} )\\ & + \gamma_2 d\varphi_{2,\theta}(t,x_{t,\theta}^{1,N } |\{\mu_{t,\theta}\} ) \biggl] -(\epsilon_N+ \epsilon_\theta ) \\ =&E_{x_{s-}^N}\biggl[J^{1,N} (s,x_{s-}^N ,\xi_{\cdot,\theta}^{ +},\xi_{\cdot,\theta}^{ -};\xi_{\cdot,\theta}^{ -1}|\{\mu_{t,\theta}\} )\biggl]-(\epsilon_N+ \epsilon_\theta ) . \end{align*} \ifCLASSOPTIONcaptionsoff \fi \end{document}
Swiss Journal of Economics and Statistics

A daily fever curve for the Swiss economy

Marc Burri & Daniel Kaufmann

Swiss Journal of Economics and Statistics, volume 156, Article number 6 (2020)

Because macroeconomic data is published with a substantial delay, assessing the health of the economy during the rapidly evolving COVID-19 crisis is challenging. We develop a fever curve for the Swiss economy using publicly available daily financial market and news data. The indicator can be computed with a delay of 1 day. Moreover, it is highly correlated with macroeconomic data and survey indicators of Swiss economic activity. Therefore, it provides timely and reliable warning signals if the health of the economy takes a turn for the worse.

Because macroeconomic data is published with a substantial delay, assessing the health of the economy during the rapidly evolving coronavirus disease of 2019 (COVID-19) crisis is challenging. Usually, policy makers and researchers rely on early information from surveys and financial markets to construct leading indicators and estimate forecasting models (see, e.g., Abberger et al., 2014; Galli, 2018; Kaufmann and Scheufele, 2017; OECD, 2010; Stuart, 2020; Wegmüller and Glocker, 2019, for Swiss applications). These indicators and forecasts are published with a delay of 1 to 2 months. During the COVID-19 crisis, however, we need high-frequency information to assess how stricter or looser health restrictions and economic stimulus programs affect the economy.

We propose a novel daily fever curve (f-curve) for the health of the Swiss economy based on publicly available financial market and news data. We construct risk premia on corporate bonds, term spreads, and stock market volatility indices starting in 2000. In addition, we collect short economic news from online newspaper archives.
We then estimate a composite indicator which has the interpretation of a fever curve: as for monitoring the condition of a patient, an increase of the fever curve provides a reliable and timely warning signal if health takes a turn for the worse.

Panel a of Fig. 1 shows the f-curve (on an inverted scale) jointly with real gross domestic product (GDP) growth: the indicator closely tracks economic crises. It presages the downturn during the Global Financial Crisis and responds to the removal of the minimum exchange rate and to the euro area debt crisis. The f-curve also responds strongly to the COVID-19 crisis (see panel b). The indicator starts to rise in late February. By then, it had become evident that the COVID-19 crisis would hit most European countries; in Switzerland, the first large events were canceled. It reaches a peak shortly after the lockdown. Afterward, the fever curve gradually declines with news about economic stimulus packages and the gradual loosening of the lockdown. The peak during the COVID-19 crisis is comparable with the Global Financial Crisis, but the speed of the downturn is considerably higher. In addition, so far, the crisis is less persistent. Up to June 4, 2020, the f-curve improved to 1/4 of its peak value during the lockdown.

Fig. 1 A fever curve for the Swiss economy. Panel a compares the fever curve (inverted and rescaled) to quarterly GDP growth. Panel b gives daily values of the fever curve along with important policy decisions

The indicator has several advantages we hope will make it useful for policy makers and the public at large. The methodology of the f-curve is simple; the data selection process is based on economic theory and intuition; the data sources are publicly available; and we provide the program codes and daily updates on https://github.com/dankaufmann/f-curve/. Moreover, additional daily indicators that track economic activity are easily integrated into the modeling framework.
There are various initiatives in Switzerland and abroad to satisfy the demand for reliable high-frequency information during the COVID-19 crisis. Becerra et al. (2020) develop sentiment indicators using Internet search engine data. Brown and Fengler (2020) provide information on Swiss consumption behavior based on debit and credit card payment data. Eckert and Mikosch (2020) develop a daily mobility index using data on traffic, payments, and cash withdrawals. For the USA, economists at the Federal Reserve Bank of New York estimate a weekly index of economic activity based on retail sales, unemployment insurance claims, and other rapidly available data on production, prices, and employment (Lewis et al. 2020). Moreover, Buckman et al. (2020) create a daily news sentiment indicator that leads traditional survey-based US consumer sentiment. Our paper is the first, to the best of our knowledge, to combine daily information from newspapers and financial market data in a daily measure of economic activity for Switzerland.

In what follows, we describe the data and methodology. Then, we provide an analysis of the in- and out-of-sample performance. The last section concludes.

We use publicly available bond yields underlying the SIX Swiss Bond Indices® (SIX 2020a). These data are available on a daily basis and with a delay of 1 day. Because many bond yields start only around 2007, we extend the series with a close match of government and corporate bond yields from the Swiss National Bank (see Table A.2 and Figure A.2 in the Online Appendix). Then, we compute various spreads that should be correlated with economic activity: a government bond term spread (8Y - 2Y), the interest rate differential vis-à-vis the euro area (1Y), and risk premia on short- and long-term corporate debt. Besides interest rate spreads for Switzerland, we compute risk premia of foreign companies that issue short- and long-term debt in Swiss francs.
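The spread construction described above is simple arithmetic on matched yields; a minimal sketch follows. The function names and example numbers are illustrative placeholders, not the actual SIX or SNB series used in the paper:

```python
# Illustrative construction of the spreads described above.
# All input yields are placeholder floats (in percent), not real data.

def term_spread(yield_8y: float, yield_2y: float) -> float:
    """Government bond term spread (8Y - 2Y)."""
    return yield_8y - yield_2y

def risk_premium(corporate_yield: float, government_yield: float) -> float:
    """Risk premium of corporate over government debt at matched maturity."""
    return corporate_yield - government_yield

def rate_differential(chf_1y: float, eur_1y: float) -> float:
    """Interest rate differential vis-a-vis the euro area (1Y)."""
    return chf_1y - eur_1y

spread = term_spread(0.5, -0.2)      # hypothetical CHF yields
premium = risk_premium(1.2, 0.3)     # hypothetical corporate vs. government
```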
We also include term spreads for the USA and the euro area. For the latter, we use short-term interest rates in euro (European Central Bank 2020) and long-term yields of German government debt (Deutsche Bundesbank 2020). In addition, we include two implied volatility measures of the Swiss and US stock markets. Swiss data stem from SIX (2020b) and are published with a delay of 1 day. The US data stem from the Chicago Board Options Exchange (2020).

These financial market data should be related to the Swiss business cycle. Stuart (2020) shows that the term spread exhibits a lead on the Swiss business cycle. Kaufmann (2020) argues that a narrowing of the interest rate differential appreciates the Swiss franc and thereby dampens economic activity. Risk premia are correlated with the default risk of companies, which should increase during economic crises. Finally, recent research documents an increase in uncertainty during economic downturns (Baker et al. 2016; Scotti 2016). There are various ways to measure uncertainty (see, e.g., Dibiasi and Iselin 2016). Because we aim to exploit quickly and freely available financial market data, we prefer a measure of stock market volatility.

We complement the financial market data with sentiment indicators based on Swiss newspapers. We extract headlines and lead texts from the online archives of the Tages-Anzeiger, the Neue Zürcher Zeitung, and the Finanz und Wirtschaft. We focus on the headline and lead text as these are publicly available and often contain the key messages of the articles. To reduce the number of potentially relevant articles, and to decompose the sentiment indicator into a domestic and a foreign part, we only use articles satisfying specific search queries (see Table A.3 in the Online Appendix for a detailed description). To calculate a news sentiment, we use the lexical methodology (see, e.g., Ardia et al. 2019; Shapiro et al. 2017; Thorsrud 2020).
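As an illustration of this lexical approach, a score of the form (positive − negative) / total words can be sketched as follows. The tiny English word lists are placeholders, not the German-language lexicon of Remus et al. (2010) used in the paper:

```python
# Sketch of a lexical sentiment score: (#positive - #negative) / #total.
# The word sets below are illustrative placeholders only.

POSITIVE = {"growth", "recovery", "gain", "optimism"}
NEGATIVE = {"crisis", "loss", "recession", "lockdown"}

def sentiment_score(text: str) -> float:
    """Return (#positive - #negative) / #total words for one snippet."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

def daily_sentiment(articles: list[str]) -> float:
    """Simple average of article scores for one day."""
    return sum(sentiment_score(a) for a in articles) / len(articles)

score = daily_sentiment(["economic recovery continues", "lockdown hits exports"])
```

Averaging these per-article scores by day, separately for the domestic and foreign search queries, yields the daily sentiment series.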
First, we filter out irrelevant information. Second, we identify positive and negative words using the lexicon developed by Remus et al. (2010). Finally, we calculate for each article n and each day t a sentiment score: $$S_{t,n} = \frac{\#P_{t,n} - \#N_{t,n}}{\#T_{t,n}} \ ,$$ where #Pt,n, #Nt,n, and #Tt,n represent, for each article and each time period, the number of positive, negative, and total words, respectively. We then compute a simple average over all articles to obtain daily indicators for articles about the domestic and the foreign economy.

News sentiment indicators are receiving increasing attention for forecasting economic activity. Buckman et al. (2020) show that during the COVID-19 pandemic, news sentiment indicators provide reliable and early information on the economy, even compared to quickly available survey data. Moreover, Ardia et al. (2019) show that news sentiment helps forecast US industrial production growth.

The financial market data and news indicators are quite volatile, but they are also correlated with each other. To parsimoniously summarize the information content of the data and remove idiosyncratic noise, we estimate a factor model in static form: $$X = F\Lambda + e$$ The model comprises N variables and T daily observations. Therefore, the data matrix X is (T×N), the common factors F are (T×r), the factor loadings Λ are (r×N), and the unexplained error term e is (T×N). The advantage of a factor model is that we can parsimoniously summarize the information content of the large data matrix X with a relatively small number of common factors r. Assuming that the idiosyncratic components are only weakly serially and cross-sectionally correlated, we can estimate the factors and loadings by principal components (Bai and Ng 2013; Stock and Watson 2002). Our main indicator is the first principal component of the static factor model.
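The principal-components step can be sketched with plain numpy; this is a minimal illustration on simulated data, not the authors' published code:

```python
import numpy as np

def first_principal_component(X: np.ndarray) -> np.ndarray:
    """Return the first principal component (a T-vector) of a T x N
    data matrix X, after standardizing each column."""
    # Standardize each indicator to zero mean, unit variance.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # SVD of the standardized data: the first factor estimate is
    # proportional to the first left singular vector.
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, 0] * S[0] / np.sqrt(Z.shape[0])

# Simulated example: eight indicators driven by one common factor.
rng = np.random.default_rng(0)
common = rng.standard_normal(200)
X = np.outer(common, rng.standard_normal(8)) + 0.1 * rng.standard_normal((200, 8))
f = first_principal_component(X)
# `f` should be highly correlated (up to sign) with `common`.
```

The sign of a principal component is arbitrary, which is why the paper normalizes the indicator so that it rises during crises.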
We normalize the indicator so that it increases during crises. Because this factor has no clear economic interpretation, we decompose it into a contribution from domestic and a contribution from foreign fluctuations. Suppose that there are only two factors driving the variables: one factor captures foreign fluctuations, and the other captures domestic fluctuations. We allow for spillovers from abroad to the domestic economy, but not vice versa. Under these assumptions, the factor model reads: $$\left[\begin{array}{cc} X & X^{*} \end{array}\right] = \left[\begin{array}{cc} f & f^{*}\end{array}\right] \left[\begin{array}{cc} \lambda_{11} & 0 \\ \lambda_{21} & \lambda_{22} \end{array}\right] + e $$ where X, X∗ denote the data matrices comprising domestic and foreign variables, respectively. In addition, f, f∗ represent the domestic and foreign factors, and λ11, λ21, λ22 are the loading matrices.

To estimate this factor model, we use an iterative procedure inspired by Boivin et al. (2009). First, we estimate the foreign factor only on foreign data. This imposes that foreign variables load only on the foreign factor. Second, we estimate the domestic factor on \(\tilde X\), where $$\tilde X = X - \lambda_{21}f^{*} \ ,$$ removes the variation explained by the foreign factor. We can estimate λ21 for every indicator comprised in X by a regression on the domestic and foreign factors. Because this regression depends on the value of the domestic factor, we repeat this step 50 times (see Boivin et al., 2009; Kaufmann and Lein 2013, for more details). Finally, we estimate a decomposition by regressing the f-curve on the domestic and foreign factors. This procedure does not guarantee that the decomposition adds up exactly to the overall factor. However, the unexplained rest turns out to be relatively small.
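A rough sketch of this two-block, iterative estimation is given below, simplified to one factor per block. The helper `pca_factor` and the loop structure are illustrative assumptions mirroring the description above, not the authors' code:

```python
import numpy as np

def pca_factor(Z: np.ndarray) -> np.ndarray:
    """First principal component (up to scale/sign) of a T x N matrix."""
    U, S, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, 0] * S[0]

def two_block_factors(X_dom: np.ndarray, X_for: np.ndarray, n_iter: int = 50):
    """Estimate a foreign factor on foreign data only, then iterate:
    regress domestic indicators on both factors to get the foreign
    loadings (lambda_21), purge that contribution, and re-estimate
    the domestic factor (spillovers run only from abroad)."""
    f_star = pca_factor(X_for)              # step 1: foreign factor
    f = pca_factor(X_dom)                   # initial domestic factor
    for _ in range(n_iter):
        regressors = np.column_stack([f, f_star])
        # Column-wise OLS of domestic indicators on (f, f*).
        coefs, *_ = np.linalg.lstsq(regressors, X_dom, rcond=None)
        lam21 = coefs[1]                    # loadings on the foreign factor
        X_tilde = X_dom - np.outer(f_star, lam21)
        f = pca_factor(X_tilde)             # re-estimate domestic factor
    return f, f_star
```

A final regression of the overall f-curve on `f` and `f_star` would then give the decomposition into domestic and foreign contributions.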
The decomposition involves additional estimation steps that may reduce the forecast accuracy; therefore, we use this decomposition only for the in-sample interpretation, but not for out-of-sample forecasting.

The f-curve should primarily be used to quickly detect turning points of the business cycle. As such, it is correlated with or leads many key macroeconomic variables (see Figure A.4 in the Online Appendix). In its current form, we have not optimized the indicator to track any particular measure of economic activity. We therefore first focus on the in-sample information content of the f-curve, highlighting that it is available earlier than most other leading indicators. For the sake of illustration, however, we additionally provide an evaluation of its pseudo out-of-sample performance for forecasting real GDP growth.

In-sample analysis

To compare the in-sample information content of the f-curve to other leading indicators, we perform a cross-correlation test (see Neusser, 2016, Ch. 12.1). Figure 2 shows a substantial correlation between the f-curve and many prominent leading indicators. There is a coincident or leading relationship with the KOF Economic Barometer, SECO's Swiss Economic Confidence, the Organisation for Economic Co-operation and Development composite leading indicator (OECD CLI), and consumer confidence. There is a coincident relationship with trendEcon's perceived economic situation; this daily indicator starts only in 2006, however. There is a significant lagging relationship with the SNB's Business Cycle Index, but this index is published with a relevant delay. Overall, these results suggest the f-curve provides sensible information comparable with other existing indicators. The key advantages of the f-curve are its prompt availability and its longer time span.

Fig. 2 Cross-correlation with other indicators. Cross-correlation between the f-curve and other prominent leading and sentiment indicators.
We aggregate all data either to quarterly frequency (consumer sentiment) or monthly frequency (remaining indicators). The dashed lines give 95% confidence intervals. A bar outside of the interval suggests a statistically significant correlation between the indicators at a lead/lag of s. Before computing the cross-correlation, the series have been pre-whitened with an AR(p) model (see Neusser 2016, Ch. 12.1). The lag order has been determined using the Bayesian information criterion. The only exception is the OECD CLI, for which we used an AR(4) model. Another advantage is that we can decompose the fluctuations of the f-curve into domestic and foreign factors. Panel a of Fig. 3 shows that the foreign contribution rises after the collapse of Lehman Brothers, but also during the euro area debt crisis. By contrast, the domestic contribution rises after the removal of the minimum exchange rate in 2015, but also during the COVID-19 crisis. Focusing on the COVID-19 crisis, panel b shows the indicator rose as early as the last week of February, before the actual COVID-19 lockdown. It reaches a peak during the first week of the lockdown and gradually declines thereafter. About half of the increase in the indicator can be traced back to foreign developments. Although the domestic lockdown is important, the f-curve suggests the Swiss economy would have suffered even in the absence of these restrictions. During the last 4 weeks, the contribution from foreign variables declines. The domestic contribution, however, remains elevated. Therefore, while the negative foreign demand shock seems to become less important, the model suggests economic activity will remain subdued due to domestic headwinds as well. Decomposition into domestic and foreign variables. Decomposition of the f-curve into foreign factors, domestic factors, and an unexplained remainder. Pseudo out-of-sample evaluation How reliable is the f-curve? To answer this question, we perform a pseudo-real-time forecast evaluation.
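The pre-whitening and cross-correlation procedure just described can be sketched in a few lines. This is an illustrative sketch rather than the authors' code: the helper names (`ar_residuals`, `best_ar_order`, `cross_correlation`) are hypothetical, the AR(p) fit is plain OLS, and the 95% band uses the usual ±1.96/√n approximation.

```python
import numpy as np

def ar_residuals(x, p):
    """OLS fit of an AR(p) with intercept; returns the residual series."""
    x = np.asarray(x, dtype=float)
    if p == 0:
        return x - x.mean()
    Y = x[p:]
    lags = np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])
    X = np.column_stack([np.ones(len(Y)), lags])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return Y - X @ beta

def best_ar_order(x, pmax=8):
    """Select the AR order by the Bayesian information criterion."""
    best_p, best_bic = 0, np.inf
    for p in range(pmax + 1):
        e = ar_residuals(x, p)
        n = len(e)
        bic = n * np.log(e @ e / n) + (p + 1) * np.log(n)
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p

def cross_correlation(x, y, max_lag=6):
    """Cross-correlogram of two (pre-whitened) series, plus the
    approximate 95% band 1.96/sqrt(n)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    ccf = {}
    for s in range(-max_lag, max_lag + 1):
        if s >= 0:
            ccf[s] = float(np.mean(x[:n - s] * y[s:]))
        else:
            ccf[s] = float(np.mean(x[-s:] * y[:n + s]))
    return ccf, 1.96 / np.sqrt(n)
```

Mirroring the procedure in the text, each series would be pre-whitened with `ar_residuals(x, best_ar_order(x))` before the cross-correlogram is computed.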
To this end, we use the real-time data set for quarterly GDP vintages by Indergand and Leist (2014).Footnote 13 In the evaluation, we use the following direct forecasting model: $$y_{\tau+h} = \alpha_{h} + \beta_{h,1}f_{\tau|t} + \beta_{h,2}f_{\tau-1}+\nu_{\tau+h}$$ where y_τ denotes quarterly GDP growth, h is the forecast horizon, τ gives time in quarterly frequency, and t denotes time in daily frequency. f_{τ|t} is our best guess of the f-curve for the entire quarter based on daily information at time t. We compute f_{τ|t} and f_τ as the simple average of available daily observations for a given quarter. Finally, ν_{τ+h} is an error term. At the time of our last update, τ = 2020 Q2 and t = 4 June 2020. We then conduct a forecast based on the state of information each time a new quarterly GDP vintage is published by SECO.Footnote 14 This yields 70 nowcasts (69 one-quarter-ahead forecasts). These forecasts are compared to three benchmarks. First, we compare the forecasts to the first quarterly release of GDP growth for the corresponding quarter. Because quarterly GDP is substantially revised ex post, we treat the initial quarterly GDP release as a forecast of the true GDP figure. Second, we use an autoregressive model of order 1, AR(1), estimated on the corresponding real-time vintage of GDP growth. Third, using the same forecasting equation as for the f-curve, we forecast GDP growth using the KOF Economic Barometer, a prominent monthly composite leading indicator (Abberger et al. 2014). To compute the forecast errors, we use the last available release of quarterly GDP from June 3, 2020. Table 1 panel a shows that the root-mean-squared error (RMSE) of the f-curve is higher than that of the first official GDP release. However, the difference is not statistically significant. The advantage of the f-curve is, of course, that its value for the entire quarter is available about 2 months earlier than the first GDP release. In addition, we compare the f-curve to an AR(1) model.
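The direct forecasting regression and the accuracy comparison can be sketched as follows. This is a minimal illustration, not the authors' implementation: `fit_direct` estimates the equation above by OLS, and `dmw_test` is a textbook Diebold-Mariano-type statistic under quadratic loss, with Bartlett (Newey-West) weights and a normal approximation for the p value.

```python
import numpy as np
from math import erf, sqrt

def fit_direct(y, f, h):
    """OLS fit of the direct model y[t+h] = a + b1*f[t] + b2*f[t-1] + e.
    Returns the coefficient vector (a, b1, b2)."""
    T = len(y)
    X = np.array([[1.0, f[t], f[t - 1]] for t in range(1, T - h)])
    Y = np.array([y[t + h] for t in range(1, T - h)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta

def dmw_test(e1, e2, h=1):
    """Diebold-Mariano-West statistic for equal predictive accuracy under
    quadratic loss (positive values favour the second model), with a
    Newey-West variance (Bartlett weights, truncation h-1) and a
    two-sided normal p value."""
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2   # loss differential
    n, dbar = len(d), float(np.mean(d))
    v = float(np.mean((d - dbar) ** 2))
    for k in range(1, h):
        gk = float(np.mean((d[k:] - dbar) * (d[:-k] - dbar)))
        v += 2.0 * (1.0 - k / h) * gk
    stat = dbar / sqrt(v / n)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(stat) / sqrt(2.0))))
    return stat, p
```

A nowcast for the latest quarter is then `beta @ [1, f[-1], f[-2]]`, and the forecast-error series from two competing models feed directly into `dmw_test`.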
Panel b shows that the f-curve outperforms the AR(1) benchmark. The RMSE is 18% lower for the current quarter. Moreover, the difference in forecast accuracy is statistically significant. For the next quarter, however, the f-curve does not provide a more accurate forecast than the AR(1) model. Panel c shows that the f-curve yields results similar to the KOF Economic Barometer. The difference in the RMSE is never statistically significant. This suggests the advantage of our indicator primarily lies in its prompt availability. Table 1 Pseudo-real-time evaluation. Root-mean-squared errors (RMSE) for forecasts on days with a new quarterly GDP release. A lower RMSE implies higher predictive accuracy. h=0 (h=1) denotes the forecast for the current (next) quarter. We use three benchmarks. First, we use the first quarterly release of the corresponding quarter (panel a). Second, we use an AR(1) model (panel b). Third, we use the KOF Economic Barometer (panel c). The Diebold-Mariano-West (DMW) test provides a p value for the null hypothesis of equal predictive accuracy against the alternative written in the column header (Diebold and Mariano 2002; West 1996). We assume a quadratic loss function. We perform a subsample analysis in Table 2. The current vintage of GDP, which we use to compute the forecast errors, will likely be revised in the future. One of the reasons is that future vintages will include annual GDP estimates by the SFSO, which are based on comprehensive firm surveys. Therefore, we restrict the sample to years for which the GDP figures already include these annual figures (panel a). The f-curve performs better on this sample. In fact, the RMSE is almost identical to the RMSE of the first GDP release for the current quarter. A similar picture emerges when excluding economic crises (panel b). This implies that the f-curve not only signals deep economic crises but also tracks the economy well during normal times. Table 2 Subsample evaluation for real GDP growth: First release vs.
f-curve. Root-mean-squared errors (RMSE) for forecasts on days with a new quarterly GDP release. A lower RMSE implies higher predictive accuracy. h=0 (h=1) denotes the forecast for the current (next) quarter. Panel (a) shows the evaluation for GDP figures that include the annual SFSO estimates (until 2018). Panel (b) excludes economic crises. As benchmark, we use the first quarterly release of the corresponding quarter. The Diebold-Mariano-West (DMW) test provides a p value for the null hypothesis of equal predictive accuracy against the alternative written in the column header (Diebold and Mariano 2002; West 1996). We assume a quadratic loss function. Are the financial market or news data more important for the forecasting performance of the f-curve? Figure 4 shows two indicators calculated using only financial market and only news data, respectively. Although the indicators are positively correlated, there are two key differences. First, the financial market data respond more strongly during crises. Second, the news data are more volatile.Footnote 15 This suggests the financial market data provide a more accurate signal of the business cycle than the news data. Table 3 confirms this view. The RMSE for an indicator based only on financial market variables amounts to 0.57, the same as for the overall f-curve. Meanwhile, the RMSE of a forecast based only on news data amounts to 0.64. The news data do not worsen the f-curve because the factor model including financial market data removes the idiosyncratic fluctuations; taken in isolation, however, the news indicator performs worse. Comparison of news and financial market data. Two indicators estimated only on financial market and news data, respectively. Table 3 Comparison news vs. financial data. Root-mean-squared errors (RMSE) for forecasts on days with a new quarterly GDP release. A lower RMSE implies higher predictive accuracy. h=0 (h=1) denotes the forecast for the current (next) quarter.
Panel (a) shows the evaluation for an indicator based only on financial market data. Panel (b) shows the evaluation for an indicator based only on news data. As benchmark, we use the first quarterly GDP release for the corresponding quarter. The Diebold-Mariano-West (DMW) test provides a p value for the null hypothesis of equal predictive accuracy against the alternative written in the column header (Diebold and Mariano 2002; West 1996). We assume a quadratic loss function. Although it is too early to judge the actual real-time performance of the indicator, Fig. 5 provides some preliminary results on the stability of the f-curve over time. One reason why the indicator is revised is that not all data series are available in real time (the ragged-edge problem). Panel (a) shows results over the first month in which we updated the indicator on a daily basis. On average, more than 8 out of 12 series are available with a delay of 1 day. After 3 days, almost all indicators are available. Real-time results since the initial version of the f-curve. Panel a: average number of observations available for calculating the f-curve (left figure). The different shades of gray represent estimates over time from May 11, 2020, to May 29, 2020 (right figure). Panel b: estimates of the f-curve using the methodology in the Working Paper (v1.0) and the current version (v2.0). The main reason why the average lies below 12 is that the archive of the Tages-Anzeiger has not been updated since May 12, 2020.Footnote 16 Therefore, we augmented the indicator with information from this newspaper's online edition. Adding this source resulted in a slightly larger revision of the indicator compared to the Working Paper version (see panel b). However, the correlation between the old and new versions is 0.99 and the broad picture during the COVID-19 crisis is identical. We develop a daily indicator of Swiss economic activity. A major strength of the indicator is that it can be updated with a delay of only 1 day.
An evaluation of the indicator shows that it is not only correlated with other business cycle indicators but also accurately tracks Swiss GDP growth. Therefore, the f-curve provides an accurate and flexible framework to track Swiss economic activity at high frequency. Having said that, there is still room for improvement. We see six promising avenues for future research. First, the news sentiment indicators could exploit other publicly available news sources, in particular newspapers from the French- and Italian-speaking parts of Switzerland. Second, we could use a topic modeling algorithm, instead of our own search queries, to classify news according to countries, sectors, and economic concepts (see e.g., Thorsrud, 2020). Third, the lexicon could be tailored specifically to economic news (see e.g., Shapiro et al., 2017). Fourth, we could examine the predictive ability of multiple factors and for other macroeconomic data. Fifth, the information could be used to disaggregate quarterly GDP and industrial production into monthly or even weekly series. Finally, it would be desirable to collect the information from the many different daily indicators currently being developed and to exploit it in a single composite indicator or indicator data set. Exploiting all this new information will likely further improve our understanding of the health of the Swiss economy at high frequency. Data are available on https://github.com/dankaufmann/f-curve/. See Table A.1 in the Online Appendix for publication lags of some important macroeconomic data and leading indicators. We plan to continuously extend the indicator. We therefore welcome suggestions for improvements and extensions. Data from the Swiss National Bank are published with a longer delay. Therefore, these bond yields cannot be used to track the economy on a daily basis. We therefore move forward all term spreads by half a year.
During the first month of daily updates, we noticed that the Tages-Anzeiger updates its archive with a considerable delay or not at all. Therefore, in the revised version of the indicator, we additionally include articles from the Tages-Anzeiger website. We remove HyperText Markup Language (HTML) tags, punctuation, numbers, and so-called stop words (e.g., the German words der, wie, ob). The stop words are provided by Feinerer and Hornik (2019). Also, we transform all letters to lowercase. The news indicators are much more volatile than the financial market data (see Figure A.1 in the Online Appendix). We therefore compute a one-sided 2-day moving average before including them in the factor model. To account for missing values, we compute the indicator only if at least five underlying data series are observed. Moreover, we remove all weekends. Then, we interpolate a few additional missing values using an EM algorithm (Stock and Watson 2002), after normalizing the data to have zero mean and unit variance. For interpolation, we choose a relatively large number of factors (r = 4). Finally, we estimate the f-curve as the first principal component of the interpolated data set. An interesting extension would be to examine whether more than one factor contains relevant information for Swiss economic activity. We leave this extension for future research. It is noteworthy that other indicators are estimated or smoothed such that they undergo substantial revisions over time; moreover, some of the indicators are published with significant delays (see Table A.1 in the Online Appendix); finally, some are based on lagged data (see, e.g., OECD 2010). Figure A.3 in the Online Appendix provides plots of these indicators. All data sources are given in the Online Appendix. The evaluation is not strictly a real-time forecast evaluation because we use three types of in-sample information.
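The construction just described (normalize, interpolate missing values with an EM-type algorithm in the spirit of Stock and Watson (2002), then take the first principal component) can be sketched as follows. `fcurve_factor` is a hypothetical name, and the EM step is approximated here by a simple truncated-SVD iteration; the authors' exact routine may differ.

```python
import numpy as np

def fcurve_factor(X, r=4, iters=50):
    """Daily factor from a panel with missing values: normalize each
    series, EM-interpolate the gaps with a rank-r reconstruction,
    and return the first principal component of the completed panel."""
    X = np.asarray(X, dtype=float)
    mu = np.nanmean(X, axis=0)
    sd = np.nanstd(X, axis=0)
    Z = (X - mu) / sd                       # zero mean, unit variance
    miss = np.isnan(Z)
    Z[miss] = 0.0                           # initialize gaps at the mean
    for _ in range(iters):
        U, S, Vt = np.linalg.svd(Z, full_matrices=False)
        Zhat = (U[:, :r] * S[:r]) @ Vt[:r]  # rank-r reconstruction
        Z[miss] = Zhat[miss]                # update only the missing cells
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, 0] * S[0]                   # first principal component
```

With a daily data matrix X (days in rows, the 12 underlying series in columns, NaN for missing values), `fcurve_factor(X)` returns the daily factor, up to an arbitrary sign.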
First, the f-curve is constructed based on knowledge of the business cycle in the past, in particular the Global Financial Crisis. Second, the link of the underlying indicators with new data is based on inspecting whether different data sources are highly correlated. Third, the normalization of the indicators in the factor model may introduce revisions that we do not account for in the forecast evaluation. Arguably, using this in-sample information in the evaluation makes sense if the goal of the evaluation is to show whether the indicator is useful going forward, rather than whether it would have been useful in the past. These dates stem from Indergand and Leist (2014). This is also because we smooth the news indicator with a moving average of only 2 days. Comparable studies smooth over a longer time period. For example, Thorsrud (2020) uses a moving average of 60 days. On the one hand, this reduces the volatility of the news sentiment. On the other hand, it obviously renders the indicator less useful for detecting rapid daily changes. On rare occasions, the websites of other sources were not available. Abbreviations: AR(p): autoregressive model of order p; CH: Switzerland; CLI: composite leading indicator; COVID-19: coronavirus disease of 2019; EA: euro area; f-curve: fever curve; FuW: Finanz und Wirtschaft; GDP: gross domestic product; HTML: HyperText Markup Language; ILO: International Labor Organization; KOF: Konjunkturforschungsstelle (KOF Swiss Economic Institute); NZZ: Neue Zürcher Zeitung; OECD: Organisation for Economic Co-operation and Development; RMSE: root-mean-squared error; SECO: State Secretariat for Economic Affairs; SFSO: Swiss Federal Statistical Office; SNB: Swiss National Bank; TA: Tages-Anzeiger. References Abberger, K., Graff, M., Siliverstovs, B., Sturm, J.-E. (2014). The KOF Economic Barometer, Version 2014. A composite leading indicator for the Swiss business cycle. KOF Working Papers 353, Swiss Economic Institute, KOF, ETH Zurich. https://doi.org/10.3929/ethz-a-010102658. Ardia, D., Bluteau, K., Boudt, K. (2019). Questioning the news about economic growth: sparse forecasting using thousands of news-based sentiment values. International Journal of Forecasting, 35(4), 1370–1386.
https://doi.org/10.1016/j.ijforecast.2018.10.010. Bai, J., & Ng, S. (2013). Principal components estimation and identification of static factors. Journal of Econometrics, 176(1), 18–29. https://doi.org/10.1016/j.jeconom.2013.03.007. Baker, S.R., Bloom, N., Davis, S.J. (2016). Measuring economic policy uncertainty. The Quarterly Journal of Economics, 131(4), 1593–1636. https://doi.org/10.1093/qje/qjw024. Becerra, A., Eichenauer, V.Z., Indergand, R., Legge, S., Martinez, I., Mühlebach, N., Oguz, F., Sax, C., Schuepbach, K., Thöni, S. (2020). trendEcon. https://www.trendecon.org. Accessed 30 Apr 2020. Boivin, J., Giannoni, M.P., Mihov, I. (2009). Sticky prices and monetary policy: evidence from disaggregated US data. American Economic Review, 99(1), 350–84. https://doi.org/10.1257/aer.99.1.350. Brown, M., & Fengler, M. (2020). Monitoring Consumption Switzerland. https://public.tableau.com/profile/monitoringconsumptionswitzerland. Accessed 03 May 2020. Buckman, S.R., Shapiro, A.H., Sudhof, M., Wilson, D.J. (2020). News sentiment in the time of COVID-19. FRBSF Economic Letter, 2020(08), 1–5. Accessed 13 May 2020. Chicago Board Options Exchange (2020). CBOE Volatility Index: VIX [VIXCLS]. https://fred.stlouisfed.org/series/VIXCLS. Accessed 30 Apr 2020. Deutsche Bundesbank (2020). Zeitreihe BBK01.WT1010: Rendite der jeweils jüngsten Bundesanleihe mit einer vereinbarten Laufzeit von 10 Jahren. https://www.bundesbank.de/dynamic/action/de/statistiken/zeitreihen-datenbanken/zeitreihen-datenbank/723452/723452?tsId=BBK01.WT1010. Accessed 30 Apr 2020. Dibiasi, A., & Iselin, D. (2016). Measuring uncertainty. KOF Bulletin 101, KOF Swiss Economic Institute, ETH Zurich. https://ethz.ch/content/dam/ethz/special-interest/dual/kof-dam/documents/KOF_Bulletin/kof_bulletin_2016_11_en.pdf. Accessed 30 Apr 2020. Diebold, F.X., & Mariano, R.S. (2002). Comparing predictive accuracy. Journal of Business & Economic Statistics, 20(1), 134–144. https://doi.org/10.1198/073500102753410444.
Eckert, F., & Mikosch, H. (2020). A mobility indicator for Switzerland. KOF Bulletin 140, KOF Swiss Economic Institute. kof.ethz.ch/en/news-and-events/news/kof-bulletin/kof-bulletin/2020/05/ein-mobilitaetsindikator-fuer-die-schweiz.html . Accessed 14 May 2020. European Central Bank (2020). Yield curve spot rate, 1-year maturity - government bond, nominal, all issuers whose rating is triple. https://sdw.ecb.europa.eu/browseExplanation.do?node=qview&SERIES_KEY=165.YC.B.U2.EUR.4F.G_N_A.SV_C_YM.SR_1Y. Accessed 30 Apr 2020. Feinerer, I., & Hornik, K. (2019). Tm: Text Mining Package. https://CRAN.R-project.org/package=tm, R package version 0.7-7. Accessed 13 May 2020. Galli, A. (2018). Which indicators matter? Analyzing the Swiss business cycle using a large-scale mixed-frequency dynamic factor model. Journal of Business Cycle Research, 14(2), 179–218. https://doi.org/10.1007/s41549-018-0030-4. Indergand, R., & Leist, S. (2014). A real-time data set for Switzerland. Swiss Journal of Economics and Statistics, 150(IV), 331–352. https://doi.org/10.1007/BF03399410. Kaufmann, D. (2020). Wie weiter mit der Tiefzinspolitik? Szenarien und Alternativen. IRENE Policy Reports 20-01, IRENE Institute of Economic Research. https://ideas.repec.org/p/irn/polrep/20-01.html. Accessed 13 May 2020. Kaufmann, D., & Lein, S.M. (2013). Sticky prices or rational inattention – what can we learn from sectoral price data?. European Economic Review, 64, 384–394. https://doi.org/10.1016/j.euroecorev.2013.10.001. Kaufmann, D., & Scheufele, R. (2017). Business tendency surveys and macroeconomic fluctuations. International Journal of Forecasting, 33(4), 878–893. https://doi.org/10.1016/j.ijforecast.2017. Lewis, D.J., Mertens, K., Stock, J.H. (2020). Monitoring real activity in real time: the weekly economic index. Liberty Street Economics 30/03/2020, Federal Reserve Bank of New York. 
https://libertystreeteconomics.newyorkfed.org/2020/03/monitoring-real-activity-in-real-time-the-weekly-economic-index.html. Accessed 13 May 2020. Neusser, K. (2016). Time Series Econometrics, (pp. 207–214). Cham: Springer. https://doi.org/10.1007/978-3-319-32862-1_11. OECD (2010). Review of the CLI for 8 countries. OECD Composite Indicators. https://www.oecd.org/fr/sdd/indicateurs-avances/44556466.pdf. Accessed 13 May 2020. Remus, R., Quasthoff, U., Heyer, G. (2010). SentiWS - a publicly available German-language resource for sentiment analysis. In Proceedings of the 7th International Language Resources and Evaluation (LREC'10). European Language Resources Association (ELRA), (pp. 1168–71). Scotti, C. (2016). Surprise and uncertainty indexes: real-time aggregation of real-activity macro-surprises. Journal of Monetary Economics, 82(C), 1–19. https://doi.org/10.1016/j.jmoneco.2016.06.002. Shapiro, A.H., Sudhof, M., Wilson, D.J. (2017). Measuring news sentiment. Working Paper Series 2017-1, Federal Reserve Bank of San Francisco. https://doi.org/10.24148/wp2017-01. Accessed 13 May 2020. SIX (2020a). SBI®–Swiss Bond Indices. https://www.six-group.com/exchanges/indices/data_centre/bonds/sbi_en.html. Accessed 30 Apr 2020. SIX (2020b). VSMI®–Volatility Index on the SMI®. https://www.six-group.com/exchanges/indices/data_centre/strategy_indices/vsmi_en.html. Accessed 30 Apr 2020. Stock, J.H., & Watson, M.W. (2002). Macroeconomic forecasting using diffusion indexes. Journal of Business & Economic Statistics, 20(2), 147–162. https://doi.org/10.1198/073500102317351921. Stuart, R. (2020). The term structure, leading indicators, and recessions: evidence from Switzerland, 1974–2017. Swiss Journal of Economics and Statistics, 156(1), 1–17. https://doi.org/10.1186/s41937-019-0044-4. Thorsrud, L.A. (2020). Words are the new numbers: a newsy coincident index of the business cycle. Journal of Business & Economic Statistics, 38(2), 393–409. https://doi.org/10.1080/07350015.2018.1506344. 
Wegmüller, P., & Glocker, C. (2019). 30 Indikatoren auf einen Schlag. Die Volkswirtschaft, 11, 19–22. West, K. (1996). Asymptotic inference about predictive ability. Econometrica, 64(5), 1067–84. https://doi.org/10.2307/2171956. We thank an anonymous referee, Ronald Indergand, Alexander Rathke, and Jan-Egbert Sturm for helpful discussions. Marc Burri and Daniel Kaufmann contributed equally to this work. Institute of Economic Research, University of Neuchâtel, Rue A.-L. Breguet 2, Neuchâtel, 2000, Switzerland Marc Burri & Daniel Kaufmann KOF Swiss Economic Institute, ETH Zurich, Zurich, Switzerland Marc Burri Correspondence to Marc Burri. Additional file 1 The Online Appendix to this paper is available on https://www.dankaufmann.com/publications/. Replication files. Codes for replication of the main indicator are available on https://github.com/dankaufmann/f-curve/. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Burri, M., Kaufmann, D. A daily fever curve for the Swiss economy. Swiss J Economics Statistics 156, 6 (2020). https://doi.org/10.1186/s41937-020-00051-z SJES Special Focus on Covid-19
A comparison of analytic approaches for individual patient data meta-analyses with binary outcomes. Doneal Thomas, Robert Platt and Andrea Benedetti. BMC Medical Research Methodology 17:28 (2017). Accepted: 2 February 2017. Individual patient data meta-analyses (IPD-MA) are often performed using a one-stage approach, a form of generalized linear mixed model (GLMM) for binary outcomes. We compare (i) one-stage to two-stage approaches, (ii) the performance of two estimation procedures (penalized quasi-likelihood, PQL, and adaptive Gaussian Hermite quadrature, AGHQ) for GLMMs with binary outcomes within the one-stage approach, and (iii) using stratified study-effects or random study-effects. We compare the different approaches via a simulation study, in terms of bias, mean-squared error (MSE), coverage and numerical convergence, of the pooled treatment effect (β1) and the between-study heterogeneity of the treatment effect (τ1²). We varied the prevalence of the outcome, the sample size, the number of studies, and the variances and correlation of the random effects. The two-stage and one-stage methods produced approximately unbiased estimates of β1. PQL performed better than AGHQ for estimating τ1² with respect to MSE, but performed comparably with AGHQ in estimating the bias of β1 and of τ1². The random study-effects model outperformed the stratified study-effects model in small MA. The one-stage approach is recommended over the two-stage method for small MA. There was no meaningful difference between the PQL and AGHQ procedures. Although both the random-intercept and stratified-intercept approaches rest on assumptions that may be violated, fitting a GLMM with a random intercept is less prone to misfit and has a good convergence rate.
Keywords: individual patient data meta-analyses; one- and two-stage models; generalized linear mixed models; penalized quasi-likelihood; adaptive Gaussian Hermite quadrature; fixed and random study-effects. Individual Patient Data (IPD) meta-analyses (MA) are regarded as the gold standard in evidence synthesis and are increasingly being used in current practice [1, 2]. However, analyzing IPD-MA requires additional expertise and involves several choices [3], particularly when the outcome is binary. These include (i) should a one- or two-stage model be used [4, 5], (ii) what estimation procedure should be used to estimate the one-stage model [6, 7], and (iii) should the study effect be fixed or random [8]. Although IPD-MA were conventionally analyzed via a two-stage approach [9], over the last decade use of the one-stage approach has increased [10]. Recently, some have suggested that the two-stage and one-stage frameworks produce similar results for MA of large randomized controlled trials [5]. The literature suggests the one-stage method is particularly preferable when few studies or few events are available, as it uses a more exact statistical approach rather than relying on a normality approximation [3–5]. When IPD are available and the outcome is binary, the one-stage approach consists of estimating generalized linear mixed models (GLMMs) with a random slope for the exposure, to allow the exposure effect to vary across studies. Penalized quasi-likelihood (PQL), introduced by Breslow and Clayton, is a popular method for estimating the parameters of GLMMs [11]. However, regression parameters can be badly biased for some GLMMs, especially with binary outcomes with few observations per cluster, low outcome rates, or high between-cluster variability [12, 13]. Adaptive Gaussian Hermite quadrature (AGHQ) is the currently favored competitor to PQL; it approximates the maximum likelihood by numerical integration [14].
Although estimation becomes more precise as the number of quadrature points increases, it often gives rise to computational difficulties for high-dimensional random effects, and to convergence problems when variances are close to zero or cluster sizes are small [14]. The heterogeneity between studies is an important aspect to consider when carrying out IPD-MA. Such heterogeneity may arise from differences in study design, treatment protocols or patient populations [8]. When such heterogeneity is present, the convention is to include a random slope in the model, as it captures the variability of the exposure effect across studies. However, there are corresponding assumptions regarding whether the study effect is modelled as stratified or random [4, 15]. Few comparisons of GLMMs have been reported in the context of IPD-MA with binary outcomes [4, 15], that is, when the number of studies and the number of subjects within each study are small, study sizes are imbalanced, between-study heterogeneity is large, exposure effects are small, and there is interest in the variance parameter of the random treatment effect. According to previous literature, these factors have all been identified as influencing model performance [6]. While several simulation studies have been published, these have mainly limited their attention to simple models with only random intercepts [13, 16]. Thus, the performance of random effects models including both a random intercept and a random slope is less well known. Our objective was to assess and compare, via simulation studies, (i) one-stage approaches to conventional two-stage approaches, (ii) the performance of different estimation procedures for GLMMs with binary outcomes, and (iii) stratified versus random study-effects specifications in a randomized trial setting.
We use our results to develop guidelines on the choice of methods for analyzing data from IPD-MA with binary outcomes and to understand explicitly the trade-offs between computational and statistical complexity. The Methods section introduces the models we consider, the design of the simulation study, and the assessment criteria. The Results section presents and discusses results for the different methods under varying conditions, and the Discussion section concludes. We conducted a simulation study to compare various analytic approaches for analyzing data from IPD-MA with binary outcomes. Throughout, our methods assume that between-study heterogeneity exists, as is likely in practice, and so only random treatment-effects IPD meta-analysis models are considered. Data generation The data generation algorithm was developed to generate two-level data sets (e.g. patients grouped into studies). We generated a binary outcome (Y_ij) and a single binary exposure (X_ij). We index studies by j = 1, 2, …, K and individuals within study j by i = 1, 2, …, n_j. Therefore, Y_ij is the outcome observed for the i-th individual from the j-th study. The dichotomous exposure variable, X_ij, was generated from a Bernoulli distribution with probability 0.5 and recoded ±1/2 to indicate the control/treatment group [15]. To generate the binary outcome variable Y_ij, the probability of the outcome was first calculated from the random study- and treatment-effects logistic regression model (Eq. 1) or the stratified study-effects model (Eq.
2): $$ logit\left(\pi_{ij}\right)=\left(\beta_0+b_{0j}\right)+\left(\beta_1+b_{1j}\right)x_{ij} \qquad (1) $$ $$ logit\left(\pi_{ij}\right)=\beta_j+\left(\beta_1+b_{1j}\right)x_{ij} \qquad (2) $$ Here π_ij is the true probability of the outcome for the i-th individual from the j-th study, β0 denotes the mean log-odds of the outcome (study effect) and β1 the pooled treatment effect (log odds ratio). In the random study-effects case, the random effects (b_0j and b_1j) were generated from a bivariate normal distribution with mean zero and variance-covariance matrix \( \Sigma = \begin{pmatrix} \sigma^2 & \rho\sigma\tau \\ \rho\sigma\tau & \tau^2 \end{pmatrix} \). In the stratified study-effects case (i.e. Eq. (2)), the β_j were generated from a uniform distribution and b_1j was generated from a normal distribution with zero mean and variance τ². A Bernoulli distribution with probability π_ij from Eq. (1) or (2) was used to generate the binary outcome Y_ij. The number of studies, study size, total sample size, variances and correlation of the random effects, and average conditional probability were all varied, with levels described in Table 1. For each distinct combination (n = 480) of simulation parameters, 1000 IPD-MA were generated from each of Eqs. (1) and (2), allowing us to investigate a wide range of scenarios. The heterogeneity was set at I² = 0.01, 0.23 and 0.55, as defined by τ²/(τ² + π²/3) for a binary outcome using an odds ratio [17]. The levels correspond to little or no, low, and moderate heterogeneity, respectively [18].
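For illustration, the random study-effects data-generating process of Eq. (1) can be sketched as follows; `simulate_ipd_ma` is a hypothetical helper whose defaults echo the paper's parameter values (β0 = −0.85, β1 = 0.18, exposure coded ±1/2).

```python
import numpy as np

def simulate_ipd_ma(K, n, beta0=-0.85, beta1=0.18,
                    tau0sq=1.0, tau1sq=1.0, rho=0.5, rng=None):
    """Generate one IPD meta-analysis from the random study-effects
    model (Eq. 1): logit(pi_ij) = (beta0 + b0j) + (beta1 + b1j) * x_ij,
    with the exposure coded -1/2 (control) and +1/2 (treatment)."""
    if rng is None:
        rng = np.random.default_rng()
    cov = rho * np.sqrt(tau0sq * tau1sq)
    Sigma = np.array([[tau0sq, cov], [cov, tau1sq]])
    b = rng.multivariate_normal([0.0, 0.0], Sigma, size=K)
    study, x, y = [], [], []
    for j in range(K):
        xj = rng.integers(0, 2, size=n) - 0.5           # +/- 1/2 coding
        eta = (beta0 + b[j, 0]) + (beta1 + b[j, 1]) * xj
        pij = 1.0 / (1.0 + np.exp(-eta))                # inverse logit
        yj = (rng.random(n) < pij).astype(int)          # Bernoulli draws
        study.extend([j] * n)
        x.extend(xj)
        y.extend(yj)
    return np.array(study), np.array(x), np.array(y)
```

With a small study-effect variance, the marginal prevalence of the outcome stays close to expit(−0.85) ≈ 30%, matching the setting in Table 1.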
Table 1. Summary of simulation parameters^a
- IPD meta-analyses generated: M = 1000
- (Number of studies, number of subjects per study, total average sample size)^b: (K, ni, N) ∈ {(5, 100, 500), (15, 33, 500), (15, 200, 3000), (5, 357, 500), (15, 98, 500), (15, 588, 3000)}
- Fixed effect (intercept): β0 = −0.85, giving a prevalence of the outcome of π = 30%
- Fixed effect (slope): β1 = 0.18
- Random-effects variances: {τ0², τ1²} ∈ {0.05, 1, 4}
- Correlation between random effects: ρ ∈ {0, 0.5}
^a In a sensitivity analysis, we extended the number of studies to 50 with an average sample size of 9000 and reduced the prevalence of the outcome to 5%. The prevalence of the outcome was fixed at 30% by setting the intercept β0 to −0.85.
^b The number of subjects per study is reported for the large studies only when data sets were generated with imbalanced study sizes (bold in the original table: 25% large studies with 10 times more subjects).

A sensitivity analysis was also conducted to explore the performance of the different methods when only 5% of observations had a positive outcome.

Two-stage IPD methods
In the two-stage approach, each study in the IPD was analyzed separately via logistic regression:
$$ y_{ij}\sim Bernoulli\left(p_{ij}\right) $$
$$ logit\left(p_{ij}\right)=\gamma_{0j}+\gamma_{1j}x_{ij} $$
The first stage estimated the study-specific intercept and slope and their associated within-study covariance matrix (the variances of the intercept and slope, and their covariance) for each study. This reduces the IPD to a relative treatment-effect estimate and variance per study; at the second stage these aggregate data (AD) are synthesized (described below).

Model 1 - Bivariate meta-analysis
The AD were combined via a bivariate random-effects model that synthesizes the two estimates simultaneously while accounting for their between-study correlation and the within-study correlation [4].
The model assumes that the true study-specific effects follow a bivariate normal distribution and is estimated via restricted maximum likelihood, with the following marginal distribution of the estimates [19]:
$$ \begin{pmatrix} \widehat{\gamma}_{0j} \\ \widehat{\gamma}_{1j} \end{pmatrix} \sim N\left(\begin{pmatrix} \gamma_0 \\ \gamma_1 \end{pmatrix}, \Sigma + C_j\right), \qquad \Sigma=\begin{pmatrix} \tau_0^2 & \tau_{01}^2 \\ \tau_{01}^2 & \tau_1^2 \end{pmatrix} $$
where Σ is the unknown between-study variance-covariance matrix of the true effects (γ0 and γ1) and Cj (j = 1, …, K) is the within-study variance-covariance matrix of the estimates.

Model 2 - Conventional DerSimonian and Laird approach
The within-study and between-study covariances are often not estimated, since the correlation between the intercept and slope estimates is typically ignored; instead, a univariate meta-analysis of the log odds ratios is performed [20]. The marginal distribution of the estimated treatment effect under this approach is:
$$ \widehat{\gamma}_{1j} \sim N\left(\gamma_1, \tau_1^2 + var\left(\widehat{\gamma}_{1j}\right)\right) $$
with unknown parameters γ1 and τ1², estimated via the non-iterative inverse-variance weighted method (method of moments) [21].

One-stage IPD methods
The one-stage approach analyzes the IPD from all studies simultaneously while accounting for the clustering of subjects within studies [4]. The one-stage model is a form of GLMM; two different specifications are considered.
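Before turning to the one-stage specifications, the conventional two-stage pipeline (Model 2: per-study logistic fits, then DerSimonian-Laird method-of-moments pooling) can be sketched as below. This is an illustrative plain-NumPy reimplementation under our own function names, not the authors' code.

```python
import numpy as np

def two_stage_dl(study, x, y):
    """Two-stage IPD-MA sketch: first-stage logistic regression per study
    (Newton-Raphson), then a DerSimonian-Laird random-effects pooling of
    the study-specific log odds ratios."""
    est, var = [], []
    for j in np.unique(study):
        m = study == j
        X = np.column_stack([np.ones(m.sum()), x[m]])
        ys = y[m]
        beta = np.zeros(2)
        for _ in range(25):               # Newton-Raphson for logit(p) = g0 + g1*x
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            W = p * (1.0 - p)
            H = X.T @ (W[:, None] * X)    # observed information
            beta = beta + np.linalg.solve(H, X.T @ (ys - p))
        est.append(beta[1])
        var.append(np.linalg.inv(H)[1, 1])  # approx. within-study variance of slope

    est, var = np.array(est), np.array(var)
    # Second stage: DerSimonian-Laird (method-of-moments) estimate of tau^2
    w = 1.0 / var
    fixed = np.sum(w * est) / np.sum(w)
    Q = np.sum(w * (est - fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(est) - 1)) / c)
    # Random-effects pooled log odds ratio and its variance
    w_star = 1.0 / (var + tau2)
    pooled = np.sum(w_star * est) / np.sum(w_star)
    return pooled, tau2, 1.0 / np.sum(w_star)
```

The sketch assumes every study contains both outcome values and both exposure groups; in sparse scenarios (5% outcome rate, small studies) the first-stage fits can fail, which is one practical motivation for the one-stage models described next.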
Model 3 - Random intercept and random slope
We estimated a GLMM with a random study effect u0j and a random treatment effect u1j via PQL and AGHQ, allowing the random effects to be correlated, so that the between-study covariance of u0j and u1j is fully estimated:
$$ logit\left(p_{ij}\right)=\gamma_0+u_{0j}+\left(\gamma_1+u_{1j}\right)x_{ij} $$
$$ \begin{pmatrix} u_{0j} \\ u_{1j} \end{pmatrix} \sim N\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \Sigma\right), \qquad \Sigma=\begin{pmatrix} \tau_0^2 & \tau_{01}^2 \\ \tau_{01}^2 & \tau_1^2 \end{pmatrix} $$

Model 4 - Stratified-intercept one-stage
Finally, the stratified one-stage approach estimates a separate intercept for each study rather than constraining the intercepts to follow a normal (or other) distribution. No distributional assumption is therefore made for the study effects, and the between-study covariance term is no longer estimated. The model is
$$ logit\left(p_{ij}\right)=\sum_{k=1}^{K}\gamma_k I_{(k=j)}+\left(\gamma_1+u_{1j}\right)x_{ij} $$
where I(k = j) indicates that a separate intercept is estimated for each study j = 1, …, K, and u1j ~ N(0, τ1²). The parameters of both Models 3 and 4 were estimated via PQL and AGHQ.

Estimation Procedures and Approximations
The parameters of the one-stage models were estimated using PQL and AGHQ. For the two-stage approach, a logistic regression was first fit to each study via maximum likelihood; at the second stage, the parameters were estimated via the method of moments (MOM) for Model 2 and restricted maximum likelihood (REML) for Model 1 [21–23].
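As a concrete illustration of the stratified-intercept specification (Model 4), the fixed-effects part of the design matrix contains one indicator column I(k = j) per study plus the treatment column, so each study gets its own intercept. The helper below is our own sketch, not from the paper.

```python
import numpy as np

def stratified_design(study, x):
    """Fixed-effects design matrix for the stratified-intercept one-stage
    model (Model 4): K study-indicator columns I(k = j) followed by the
    common treatment column."""
    studies = np.unique(study)
    K = len(studies)
    Z = np.zeros((len(study), K + 1))
    for k, s in enumerate(studies):
        Z[study == s, k] = 1.0   # separate intercept indicator for study s
    Z[:, K] = x                  # treatment column (coded -1/2, +1/2)
    return Z

# Toy example: 3 studies of 2 subjects each
study = np.array([0, 0, 1, 1, 2, 2])
x = np.array([-0.5, 0.5, -0.5, 0.5, -0.5, 0.5])
Z = stratified_design(study, x)
```

Each row of Z has exactly one intercept indicator set; the random slope u1j would be added on top of this fixed-effects structure by the mixed-model fitting routine (PROC GLIMMIX in the paper).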
Both likelihood-based methods (PQL and AGHQ) were implemented in SAS version 9.4 using PROC GLIMMIX with default options [24]. The number of quadrature points for AGHQ was selected automatically [25], the absolute parameter-convergence criterion was 10^-8 and the maximum number of iterations was 100. For each generated data set, the following models were therefore fit:
- the two-stage approach (Models 1 and 2);
- the one-stage approach via GLMMs (Models 3 and 4) estimated with PQL;
- the one-stage approach via GLMMs (Models 3 and 4) estimated with AGHQ.
The performance of the estimation methods was evaluated using: (a) numerical convergence; (b) absolute bias; (c) root mean square error (RMSE); and (d) coverage probability - each for the pooled treatment effect and its between-study variability.

Numerical convergence
The convergence rate was estimated for all models fit as the number of simulation repetitions that converged (without returning a warning message) divided by the total attempted (M = 1000). Models that returned a warning that the estimated variance-covariance matrix was not positive definite, or that the optimality condition was violated, were considered not to have converged.

Bias
The Monte Carlo bias of the pooled treatment effect and of its between-study heterogeneity is the average deviation of the estimates from the truth across the 1000 IPD-MA in each scenario:
$$ bias=\frac{1}{1000}\sum_{j=1}^{1000}\left(\widehat{\theta}_j-\theta\right), $$
where the \( \widehat{\theta}_j \) are the parameter estimates and θ is the true value of the pooled treatment effect or its between-study variance. We also report the mean absolute bias (AB).

Mean square error
The mean square error (MSE) is a useful measure of overall accuracy because it penalizes an estimate for both bias and inefficiency.
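As a minimal sketch of these performance measures - the bias just defined, together with the RMSE and the Gaussian coverage rule described next in this section - assuming the M = 1000 estimates and standard errors of one scenario have been collected into arrays (helper names are our own):

```python
import numpy as np

def performance(theta_hat, se_hat, theta_true):
    """Monte Carlo performance measures for one scenario:
    bias, RMSE and Gaussian (normality-based) coverage."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    se_hat = np.asarray(se_hat, dtype=float)
    bias = np.mean(theta_hat - theta_true)
    rmse = np.sqrt(np.mean((theta_hat - theta_true) ** 2))
    # Covered if |theta_hat - theta| <= 1.96 * SE(theta_hat)
    covered = np.abs(theta_hat - theta_true) <= 1.96 * se_hat
    return bias, rmse, covered.mean()

# Toy check: unbiased normal estimates with correct SEs should give
# near-zero bias and roughly 95% Gaussian coverage.
rng = np.random.default_rng(1)
est = rng.normal(0.18, 0.05, size=1000)
bias, rmse, cover = performance(est, np.full(1000, 0.05), 0.18)
```

Convergence would be tracked separately, as the fraction of the 1000 repetitions that fit without warnings.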
The Monte Carlo estimate of the MSE is:
$$ MSE\left(\widehat{\theta}\right)=\frac{1}{1000}\sum_{j=1}^{1000}\left(\widehat{\theta}_j-\theta\right)^2. $$
For each scenario we report the RMSE of the pooled treatment effect and of its between-study heterogeneity, as this measure is on the same scale as the parameter.

Coverage probability
We estimated coverage of the pooled treatment effect and of its between-study heterogeneity for the various methods. Gaussian coverage was used: the true value was considered covered if \( \left|\widehat{\theta}-\theta\right|\le 1.96\times SE\left(\widehat{\theta}\right) \), and not covered otherwise.

We report the median and the 25th and 75th percentiles of the AB and RMSE of the pooled treatment effect and its between-study heterogeneity, and percentages for numerical convergence and coverage.

Tables 2, 3, 4, 5, 6 and 7 present the median and interquartile range of the AB, RMSE, coverage and convergence of the pooled treatment effect and its between-study variance, as estimated via the two- and one-stage, AGHQ and PQL, and random-intercept and stratified-intercept methods. We report results for data generated with imbalanced study sizes (different sample sizes across studies) for both the random-intercept and stratified-intercept data generations (Eqs. 1 and 2) with correlated random effects (ρ = 0.5), as this scenario is likely the closest to real life.

Table 2. Performance of the one- and two-stage approaches in small data sets^a with greater (Top panel) and lesser (Bottom panel) heterogeneity of random effects^b. Performance measures^c for the random study- and treatment-effect (Eq.
1) and stratified study-effect (Eq. 2) data generations. [Table values omitted; see original article.]
^a Small data sets had 15 studies and on average 500 total subjects.
^b Bold text represents the "best value" of each performance measure.
^c Median (25th and 75th percentiles) are reported for AB and RMSE; proportions are reported for coverage and convergence.
^d Two-stage method via conventional DerSimonian and Laird (Model 2); one-stage via the random-intercept and random treatment-effect model with PQL (Model 3).
^e (τ0², τ1²): (random study-effect variance, random treatment-effect variance).
^f The two-stage approach did not return a confidence interval for τ1², so no coverage was estimated and the comparison with the one-stage method was not applicable (NA).

Table 3. Performance of the one- and two-stage approaches in large data sets^a with greater (Top panel) and lesser (Bottom panel) heterogeneity of random effects^b. [Table values omitted; see original article.]
^a Large data sets had 15 studies and on average 3000 total subjects.

Table 4. Performance of Penalized Quasi-likelihood and Adaptive Gaussian Hermite Quadrature estimation approaches in small data sets^a with greater (Top panel) and lesser (Bottom panel) heterogeneity of random effects^b. [Table values omitted; see original article.]
^d Results are given for Adaptive Gaussian Hermite Quadrature (AGHQ) and Penalized Quasi-likelihood (PQL) for the one-stage random-intercept and random treatment-effect model (Model 3).

Table 5. Performance of Penalized Quasi-likelihood and Adaptive Gaussian Hermite Quadrature estimation approaches in large data sets^a with greater (Top panel) and lesser (Bottom panel) heterogeneity of random effects^b. [Table values omitted; see original article.]

Table 6. Performance of the stratified- and random-intercept^a model approaches in small data sets^b with greater (Top panel) and lesser (Bottom panel)
heterogeneity of random effects^c. Performance measures^d for the random study- and treatment-effect (Eq. 1) and stratified study-effect (Eq. 2) data generations. [Table values omitted; see original article.]
^a Results are given for Penalized Quasi-likelihood (PQL) for the one-stage random-intercept and random treatment-effect model (Model 3) and for the stratified-intercept and random-slope model (Model 4).
^b Small data sets had 15 studies and on average 500 total subjects.
^c Bold text represents the "best value" of each performance measure.
^d Median (25th and 75th percentiles) are reported for AB and RMSE; proportions are reported for coverage and convergence.

Table 7. Performance of the stratified- and random-intercept^a model approaches in large data sets^b with greater (Top panel) and lesser (Bottom panel) heterogeneity of random effects^c. [Table values omitted; see original article.]
^b Large data sets had 15 studies and on average 3000 total subjects.

We did not exclude results from meta-analyses that returned a warning message (imperfect convergence). These meta-analyses were counted as non-converged but, although the models failed to produce proper parameter estimates, their estimates were still included in the calculation of the bias and the MSE.

One- versus Two-stage
Tables 2 and 3 give the absolute bias (AB) of the estimates of the pooled treatment effect β1. Recalling that the true parameter value was 0.18, the biases were identical and under 0.05 for the one-stage and two-stage approaches in both small and large data sets. Results were very comparable when the outcome rate was reduced from 30 to 5% (Additional file 1: Table S1). For both the one- and the two-stage approaches, results depended on the true τ² and the sample size. For the larger sample size, the root mean square error (RMSE) of β1 was generally slightly larger with the one-stage method than with the two-stage method. The picture was similar across all heterogeneity levels (Tables 2 and 3) and when the outcome rate was reduced (Additional file 2: Table S3).
Neither the one-stage nor the two-stage method yielded coverage of β1 close to nominal levels (Tables 2 and 3). Increasing the sample size had a positive effect on percent coverage, while increasing the true heterogeneity made estimation more difficult and hence decreased coverage (Table 3). Absolute bias of the between-study heterogeneity τ1² was usually slightly lower with the one-stage approach than with the two-stage approach (Tables 2 and 3), particularly when the sample size was small (Table 2) and when a greater amount of heterogeneity existed in the random effects (Bottom panel of Table 2). Regarding the effects of the simulation parameters, AB decreased when data were generated with equal study sizes and increased when the rate of occurrence was reduced (Additional file 3: Table S2); in these cases, the one-stage approach was the most biased. The RMSE of τ1² for the one-stage estimates was mostly smaller than that of the two-stage estimates. With increased sample size or a reduced level of heterogeneity in the random effects, the RMSE of τ1² decreased by at least a factor of three for both methods. While the RMSE of τ1² was inflated when the outcome rate was reduced, the one-stage method continued to outperform the two-stage method (Additional file 4: Table S4). Convergence was not a problem for the two-stage approach, while convergence of the one-stage method varied from 90 to 100% (Tables 2 and 3).

AGHQ versus PQL
One-stage models estimated via PQL and AGHQ often yielded similar AB in β1, and no difference in AB(β1) between the methods was observed when the outcome rate was reduced (Additional file 1: Table S1). RMSE of β1 was generally greater with AGHQ than with PQL (Tables 4 and 5). Decreasing the sample size, increasing the variances of the random effects or reducing the event rate (Additional file 2: Table S3) made precise estimation more difficult, hence the RMSE increased.
When the true heterogeneity was large and the total sample was small (Top panel of Table 4), AGHQ provided coverage of β1 closer to nominal levels than PQL, while both methods provided comparable coverage when the sample size was increased (Table 5). Across both methods, coverage was higher as heterogeneity increased, and similar coverage was observed when the outcome rate was reduced (Additional file 5: Table S5). AB in τ1² was very comparable but slightly lower with PQL than with AGHQ (Tables 4 and 5), and decreased with increasing sample size, particularly with PQL (Table 5). There was substantial bias in the τ1² estimates when the event rate was reduced (Additional file 3: Table S2). Owing to the better overall performance of PQL with regard to AB, the RMSE of τ1² was generally lower with PQL than with AGHQ (Tables 4 and 5). RMSE decreased with decreased variability in the random effects and with increased sample size; PQL estimates also continued to yield smaller RMSE than AGHQ estimates when the outcome rate was reduced (Additional file 4: Table S4). We found important under-coverage of the τ1² estimates for both estimation methods, particularly with PQL (Tables 4 and 5). Percent coverage was usually fair for both estimation methods when the sample size increased, but poor when the outcome rate was reduced (Additional file 6: Table S6). Convergence occurred more often with AGHQ than with PQL (Tables 4 and 5); convergence was problematic for PQL, particularly when the true heterogeneity was low and the sample size small (Bottom panel of Table 4). Comparable convergence was seen when the event rate was reduced (Additional file 5: Table S5).

Random-intercept versus stratified-intercept
The results of the simulation studies modeling the intercept as random or fixed (a random slope was always included), with PQL estimation, are summarized in Tables 6 and 7.
Convergence was markedly low (14-97%) for the fixed-intercept & random-slope method (Tables 6 and 7), and was only reasonable for this approach when the sample size was large and heterogeneity was small, whereas convergence was always greater than 80% for the random intercept and slope approach. In general, AB in β1 was similar for the stratified-intercept (random-slope only) and random intercept & slope methods; neither the sample size nor the variability of the random effects was influential in reducing the AB in β1. The RMSE of β1 was smaller when estimated via the random intercept and slope model than when only a random slope was fit (Tables 6 and 7). Increased sample size and level of heterogeneity in the random effects were most influential in determining the coverage probability. Absolute bias in τ1² was clearly comparable between the random intercept & slope approach and the random-slope-only approach (Tables 6 and 7). For the lower outcome rate, there was a trend towards less pronounced bias when only a random slope was fit (Additional file 3: Table S2). We observed lower RMSE of τ1² when a random intercept was fit, especially when the true heterogeneity was large (Top panels of Tables 6 and 7). Comparable results were seen for both models in large samples with small true heterogeneity (Bottom panel of Table 7), and also when the outcome rate was reduced (Additional file 4: Table S4). We found considerable under-coverage of τ1² for both models, though it was more severe when only a random slope was fit (Tables 6 and 7). When the generated values of τ0² or τ1² were low (i.e. low variability in the random effects) and the sample size was increased, estimating the coverage of τ1² was less difficult for both models. The coverage probability remained an issue when the rate of occurrence was reduced (Additional file 6: Table S6).
Our simulation results indicate that when the number of subjects per study is large, the one- and two-stage methods yield very similar results. This confirms the finding of previous empirical studies [5, 26, 27] that in some cases the one-stage and two-stage IPD-MA results coincide. However, we found discrepancies between the methods, with a slight preference for the one-stage method when the number of subjects per study is small. In these situations neither method produced accurate estimates of the between-study heterogeneity associated with the treatment effect, but the biases were larger for the two-stage approach. Furthermore, one-stage methods produced less biased and more precise estimates of the variance parameter and had slightly higher coverage probabilities, though these differences may be due to using the REML estimate of τ1² instead of the DerSimonian and Laird estimator used in the two-stage approach. Estimation of GLMMs with binary outcomes continues to pose challenges, with many methods producing biased regression coefficients and variance components [7]. AGHQ has been shown to overestimate the variance component with few clusters or few subjects [17], whereas PQL has been found to underestimate the variance component while overestimating the standard errors [12]. In the context of IPD-MA, we found similar absolute bias of the PQL- and AGHQ-estimated pooled treatment effect, while the PQL estimates of the between-study variance had greater precision when study sizes were small and the random effects were correlated. This somewhat confirms previous results, which found that PQL suffers from large biases but performs better in terms of MSE than AGHQ [6].
Both estimation methods had difficulty attaining nominal coverage of the between-study heterogeneity associated with the treatment effect in two situations: (i) when the number of studies included was small, and/or (ii) when the true variances of the random effects were small. We also found that convergence was not an important problem for AGHQ when meta-analyses included studies with fewer than 50 individuals per study; however, convergence was poor when the prevalence of the outcome was reduced to 5% and the true heterogeneity was close to zero. Stratification of the intercept in one-stage models avoids the need to estimate the random effect for the intercept and the correlation between the random effects. This approach may be preferable in situations not investigated in this work (e.g. when the distribution of the random effects is skewed). However, it suffered from markedly low convergence rates when fit to small data sets (15 studies and on average 500 subjects). We used simulation studies to compare various analytic strategies for data arising from IPD-MA across a wide range of data-generation scenarios, but made some simplifications: we considered only binary outcomes, one dichotomous treatment variable, a two-level data structure, and no confounders. Moreover, we estimated GLMMs via PQL and AGHQ, but did not compare Bayesian or other estimation methods, which might be particularly useful in sparse scenarios [28]. We have assumed throughout that IPD were available. Certainly, the time and cost associated with collecting IPD are considerable; however, once such data are in hand, we have addressed several open questions about the best way to analyze them. We also note that methods exist for combining IPD and aggregated data [7].
Further study is needed to investigate alternative confidence intervals for the between-study heterogeneity that could remedy the under-coverage of the Gaussian intervals: the normality-based intervals we studied greatly underperformed in most scenarios, as their construction is likely to be invalid here. A further simplification that limits the generalizability of this work is the restriction to two-arm trials; the extension to three or more arms would require careful consideration of more complicated correlation structures in treatment effects across arms and within studies [29]. One important comparison we have not addressed is computational speed, where the two-stage method had a distinct advantage over the one-stage method, PQL was faster than AGHQ, and the stratified-intercept model ran more quickly than the random-intercept model. As far as we know, this simulation study is the first to simultaneously generate data with normally distributed and stratified random intercepts, and to compare approaches that include a random intercept for study membership with those that do not. Furthermore, we used simulation to systematically investigate the robustness of the approaches to variation in sample size, study number, outcome rate, and magnitude of the correlation and variances. Our scenarios thus allowed us to assess performance over a wide range of conditions without being exhaustive.

Guidelines for Best Practice
On the basis of these findings we can make several recommendations. When the IPD-MA includes many studies and the outcome rate is not too low, this work supports the conclusion of a previous study [5] that the conventional two-stage method of DerSimonian and Laird [21] is a good choice under the data conditions simulated here. Cornell et al.
found that the DL method produced too-narrow confidence bounds and too-small p values when the number of studies was small or the between-study heterogeneity was high [30]; in such cases a modification such as the Hartung-Knapp approach may be preferable [31]. Further, while the bivariate two-stage approach is very rarely used in practice, we found that it tended to yield good overall model performance, comparable with that of the one-stage models when study sizes were small. Our results also suggest that the one-stage method can be used in IPD-MA where study sizes are below 50 subjects per study or few events were recorded in most studies (outcome rate of 5%). In these cases the one-stage approach is more appropriate, as it models the exact binomial distribution of the data and offers more flexibility in model specification than the two-stage approach [32]. If interest lies in estimating the pooled treatment effect or the between-study heterogeneity of the treatment effect, estimation via PQL appeared the better choice, owing to its lower bias and mean square error in the settings considered; however, convergence issues occurred more often with PQL than with AGHQ. It is also important to note that convergence and coverage for τ² were an issue in both small and large total sample sizes, and when the level of true heterogeneity was large. For these simulated data, the results of the random-intercept and stratified-intercept models were not importantly different. However, under both data generations, fitting a GLMM with a random intercept was overall less sensitive to misspecification in small sample sizes with large between-study heterogeneity than the stratified-intercept GLMM, for which we observed high rates of non-convergence. There are four important caveats to these recommendations.
First, our simulations show greater accuracy of the pooled odds ratio as the number of studies increases, so an IPD-MA with more studies will provide more accurate estimates. Second, our results show that the estimation of the between-study heterogeneity of the treatment effect is highly biased regardless of the sample size and number of studies, so the variance parameter should always be expected to carry some error. Third, small overall samples mark the trade-off under which a meta-analyst might choose precision over bias, and our simulations show that PQL estimation may be preferred in these situations. Finally, a large overall sample size can eliminate the lack of statistical power present in small overall samples; in such cases, comparable results are seen for one- and two-stage methods, and fitting a two-stage analysis as a first step may be advisable as a quick and efficient investigation of heterogeneity and of the treatment-outcome association.

To summarize, the one- and two-stage methods consistently produced similar results when the number of studies and the overall sample were large. Although the PQL and AGHQ estimation procedures produced similar bias of the pooled log odds ratios, the PQL estimates had lower RMSE than the AGHQ estimates. Both the random-intercept and stratified-intercept models yielded precise and similar estimates of the pooled log odds ratios; however, the random-intercept models gave good coverage probabilities of the between-study heterogeneity in small sample sizes and an overall good convergence rate compared with the random-slope-only model.

Abbreviations
AB: Absolute bias
AGHQ: Adaptive Gaussian Hermite quadrature
GLMM: Generalized linear mixed model
IPD-MA: Individual patient data meta-analysis
MOM: Method of moments
MSE: Mean squared error
PQL: Penalized quasi-likelihood
REML: Restricted maximum likelihood

We have no acknowledgements.
This work was supported by an operating grant from the Canadian Institutes of Health Research. Andrea Benedetti is supported by the FRQ-S. Data are available upon request. DT led the study design, performed the simulations and statistical analyses, and led the writing of the manuscript. AB participated in the study design, guided the statistical analyses and edited the final draft. RP helped draft and revised the manuscript. All authors read and approved the final manuscript. Competing interests: not applicable. This article reports a simulation study and does not involve human participants.

Additional file 1: Median (interquartile range, IQR) absolute bias (%) for the treatment effect β1 for the different approaches, by number of studies, total average sample size, mixture of study sizes and degree of random-effects variances; data generated from the random study- and treatment-effect model (Eq. 1) with 5% outcome rate. (DOC 72 kb)
Additional file 2: Median (IQR) root mean square error (%) for the treatment effect β1 for the different approaches, by the same factors. (DOC 70 kb)
Additional file 3: Median (IQR) absolute bias (%) for the random treatment-effect variance τ1² for the different approaches, by the same factors. (DOC 73 kb)
Additional file 4: Median (IQR) root mean square error (%) for the random treatment-effect variance τ1² for the different approaches, by the same factors.
(DOC 63 kb)
Additional file 5: Percent coverage (percent convergence rate) for the treatment effect β1 for the different approaches, by number of studies, total average sample size, mixture of study sizes and degree of random-effects variances; data generated from the random study- and treatment-effect model (Eq. 1) with 5% outcome rate. (DOC 62 kb)
Additional file 6: Percent coverage for the random treatment-effect variance τ1² for the different approaches, by the same factors. (DOC 63 kb)

Author affiliations
Department of Epidemiology, Biostatistics & Occupational Health, McGill University, Montreal, Canada
Department of Medicine, McGill University, Montreal, Canada
Respiratory Epidemiology and Clinical Research Unit, McGill University Health Centre, Purvis Hall, 1020 Pine Avenue West, Montreal, QC, H3A 1A2, Canada

References
1. Riley RD, Simmonds MC, Look MP. Evidence synthesis combining individual patient data and aggregate data: a systematic review identified current practice and possible methods. J Clin Epidemiol. 2007;60(5):431-9. doi:10.1016/j.jclinepi.2006.09.009.
2. Stewart LA, Parmar MK. Meta-analysis of the literature or of individual patient data: is there a difference? Lancet. 1993;341(8842):418-22.
3. Debray T, Moons K, Valkenhoef G, et al. Get real in individual participant data (IPD) meta-analysis: a review of the methodology. Res Synth Methods. 2015;6(4):293-309.
4. Debray TPA, Moons KGM, Abo-Zaid GMA, et al. Individual participant data meta-analysis for a binary outcome: one-stage or two-stage? PLoS ONE. 2013;8(4):e60650. doi:10.1371/journal.pone.0060650.
5. Stewart GB, Altman DG, Askie LM, et al.
Statistical analysis of individual participant data meta-analyses: a comparison of methods and recommendations for practice. PLoS ONE. 2012;7(10):e46042. doi:10.1371/journal.pone.0046042. [published Online First: Epub Date]|.View ArticlePubMedPubMed CentralGoogle Scholar Callens M, Croux C. Performance of likelihood-based estimation methods for multilevel binary regression models. J Stat Comput Simul. 2005;75(12):1003–17. doi:10.1080/00949650412331321070. [published Online First: Epub Date]|.View ArticleGoogle Scholar Capanu M, Gönen M, Begg CB. An assessment of estimation methods for generalized linear mixed models with binary outcomes. Stat Med. 2013;32(26):4550–66. doi:10.1002/sim.5866. [published Online First: Epub Date]|.View ArticlePubMedGoogle Scholar Rondeau V, Michiels S, Liquet B, et al. Investigating trial and treatment heterogeneity in an individual patient data meta-analysis of survival data by means of the penalized maximum likelihood approach. Stat Med. 2008;27(11):1894–910. doi:10.1002/sim.3161. [published Online First: Epub Date]|.View ArticlePubMedGoogle Scholar Simmonds MC, Higgins JP, Stewart LA, et al. Meta-analysis of individual patient data from randomized trials: a review of methods used in practice. Clinical trials (London, England). 2005;2(3):209–17.View ArticleGoogle Scholar Thomas D, Radji S, Benedetti A. Systematic review of methods for individual patient data meta-analysis with binary outcomes. BMC Med Res Methodol. 2014;14:79.View ArticlePubMedPubMed CentralGoogle Scholar Breslow NE, Clayton DG. Approximate inference in generalized linear mixed models. J Am Stat Assoc. 1993;88(421):9–25. doi:10.2307/2290687. [published Online First: Epub Date]|.Google Scholar Breslow NE, Lin X. Bias correction in generalised linear mixed models with a single component of dispersion. Biometrika. 1995;82(1):81–91. doi:10.2307/2337629. [published Online First: Epub Date]|.View ArticleGoogle Scholar Jang W, Lim J. 
A numerical study of PQL estimation biases in generalized linear mixed models under heterogeneity of random effects. Commun Stat Simul Comput. 2009;38(4):692–702. doi:10.1080/03610910802627055. [published Online First: Epub Date]|.View ArticleGoogle Scholar Pinheiro JC, Bates DM. Approximations to the Log-likelihood function in the nonlinear mixed-effects model. J Comput Graph Stat. 1995;4(1):12–35. doi:10.2307/1390625. [published Online First: Epub Date]|.Google Scholar Turner RM, Omar RZ, Yang M, et al. A multilevel model framework for meta-analysis of clinical trials with binary outcomes. Stat Med. 2000;19(24):3417–32.View ArticlePubMedGoogle Scholar Benedetti A, Platt R, Atherton J. Generalized linear mixed models for binary data: Are matching results from penalized quasi-likelihood and numerical integration less biased? PLoS ONE. 2014;9(1):e84601. doi:10.1371/journal.pone.0084601. [published Online First: Epub Date]|.View ArticlePubMedPubMed CentralGoogle Scholar Moineddin R, Matheson FI, Glazier RH. A simulation study of sample size for multilevel logistic regression models. BMC Med Res Methodol. 2007;7:34. doi:10.1186/1471-2288-7-34. [published Online First: Epub Date]|.View ArticlePubMedPubMed CentralGoogle Scholar Higgins JP, Thompson SG, Deeks JJ, et al. Measuring inconsistency in meta-analyses. BMJ (Clin Res Ed). 2003;327(7414):557–60. doi:10.1136/bmj.327.7414.557. [published Online First: Epub Date]|.View ArticleGoogle Scholar van Houwelingen HC, Arends LR, Stijnen T. Advanced methods in meta-analysis: multivariate approach and meta-regression. Stat Med. 2002;21(4):589–624. doi:10.1002/sim.1040. [published Online First: Epub Date].View ArticlePubMedGoogle Scholar Riley RD. Multivariate meta-analysis: the effect of ignoring within-study correlation. J R Stat Soc A Stat Soc. 2009;172(4):789–811. doi:10.1111/j.1467-985X.2008.00593.x. [published Online First: Epub Date]|.View ArticleGoogle Scholar DerSimonian R, Laird N. Meta-analysis in clinical trials. 
Control Clin Trials. 1986;7(3):177–88.View ArticlePubMedGoogle Scholar Chen H, Manning AK, Dupuis J. A method of moments estimator for random effect multivariate meta-analysis. Biometrics. 2012;68(4):1278–84. doi:10.1111/j.1541-0420.2012.01761.x. [published Online First: Epub Date]|.View ArticlePubMedPubMed CentralGoogle Scholar Hardy RJ, Thompson SG. A Likelihood approach to meta-analysis with random effects. Stat Med. 1996;15(6):619–29. doi:10.1002/(SICI)1097-0258(19960330)15:6<619::AID-SIM188>3.0.CO;2-A. [published Online First: Epub Date]|.View ArticlePubMedGoogle Scholar Littell RC, Milliken GA, Stroup WW, Wolfinger DR. SAS system for mixed models. Cary: SAS Institute, Inc.; 1996.Google Scholar Proc Glimmix. Maximum Likelihood Estimation Based on Adaptive Quadrature, SAS Institute Inc., SAS 9.4 Help and Documentation. Cary: SAS Institute Inc; 2002–2004.Google Scholar Abo-Zaid G, Guo B, Deeks JJ, et al. Individual participant data meta-analyses should not ignore clustering. J Clin Epidemiol. 2013;66(8):865–73.e4. doi:10.1016/j.jclinepi.2012.12.017. [published Online First: Epub Date]. Mathew T, Nordström K. Comparison of One-step and Two-step meta-analysis models using individual patient data. Biom J. 2010;52(2):271–87. doi:10.1002/bimj.200900143. [published Online First: Epub Date]|.PubMedGoogle Scholar Gelman A, Hill J. Data analysis using regression and multilevel/hierarchical models. Cambridge: Cambridge University Press; 2007.Google Scholar Lu G, Ades AE. Combination of direct and indirect evidence in mixed treatment comparisons. Stat Med. 2004;23(20):3105–24. doi:10.1002/sim.1875. [published Online First: Epub Date]|.View ArticlePubMedGoogle Scholar Cornell JE, Mulrow CD, Localio R, Stack CB, Meibohm AR, Guallar E, et al. Random-effects meta-analysis of inconsistent effects: a time for change. Ann Intern Med. 2014;160(4):267–70.View ArticlePubMedGoogle Scholar IntHout J, Iaonnidis JPA, Borm GF. 
The the hartung-knapp-sidik-jonkman method for random effects meta-analysis is straightforward and considerably outperforms the standard DerSimonian-laird method. BMC Med Res Methodol. 2014;14:25.View ArticlePubMedPubMed CentralGoogle Scholar Noh M, Lee Y. REML estimation for binary data in GLMMs. J Multivar Anal. 2007;98(5):896–915. http://dx.doi.org/10.1016/j.jmva.2006.11.009. [published Online First: Epub Date]. Data analysis, statistics and modelling
CommonCrawl
\begin{document} \baselineskip = 5mm \newcommand \ZZ {{\mathbb Z}} \newcommand \FF {{\mathbb F}} \newcommand \NN {{\mathbb N}} \newcommand \QQ {{\mathbb Q}} \newcommand \RR {{\mathbb R}} \newcommand \CC {{\mathbb C}} \newcommand \PR {{\mathbb P}} \newcommand \AF {{\mathbb A}} \newcommand \uno {{\mathbbm 1}} \newcommand \Le {{\mathbbm L}} \newcommand \bcA {{\mathscr A}} \newcommand \bcB {{\mathscr B}} \newcommand \bcC {{\mathscr C}} \newcommand \bcD {{\mathscr D}} \newcommand \bcE {{\mathscr E}} \newcommand \bcF {{\mathscr F}} \newcommand \bcG {{\mathscr G}} \newcommand \bcH {{\mathscr H}} \newcommand \bcM {{\mathscr M}} \newcommand \bcN {{\mathscr N}} \newcommand \bcI {{\mathscr I}} \newcommand \bcK {{\mathscr K}} \newcommand \bcL {{\mathscr L}} \newcommand \bcO {{\mathscr O}} \newcommand \bcP {{\mathscr P}} \newcommand \bcQ {{\mathscr Q}} \newcommand \bcR {{\mathscr R}} \newcommand \bcS {{\mathscr S}} \newcommand \bcT {{\mathscr T}} \newcommand \bcU {{\mathscr U}} \newcommand \bcV {{\mathscr V}} \newcommand \bcW {{\mathscr W}} \newcommand \bcX {{\mathscr X}} \newcommand \bcY {{\mathscr Y}} \newcommand \bcZ {{\mathscr Z}} \newcommand \Spec {{\rm {Spec}}} \newcommand \Pic {{\rm {Pic}}} \newcommand \Jac {{{J}}} \newcommand \Alb {{\rm {Alb}}} \newcommand \NS {{{NS}}} \newcommand \Corr {{Corr}} \newcommand \Sym {{\rm {Sym}}} \newcommand \Alt {{\rm {Alt}}} \newcommand \Prym {{\rm {Prym}}} \newcommand \cone {{\rm {cone}}} \newcommand \cha {{\rm {char}}} \newcommand \tr {{\rm {tr}}} \newcommand \alg {{\rm {alg}}} \newcommand \im {{\rm im}} \newcommand \Hom {{\rm Hom}} \newcommand \colim {{{\rm colim}\, }} \newcommand \End {{\rm {End}}} \newcommand \coker {{\rm {coker}}} \newcommand \id {{\rm {id}}} \newcommand \tor {{\rm {tor}}} \newcommand \spc {{\rm {sp}}} \newcommand \Ob {{\rm Ob}} \newcommand \Aut {{\rm Aut}} \newcommand \cor {{\rm {cor}}} \newcommand \res {{\rm {res}}} \newcommand \Gal {{\rm {Gal}}} \newcommand \PGL {{\rm {PGL}}} \newcommand \Gr {{\rm {Gr}}} 
\newcommand \Bl {{\rm {Bl}}} \newcommand \Sing {{\rm {Sing}}} \newcommand \spn {{\rm {span}}} \newcommand \Nm {{\rm {Nm}}} \newcommand \inv {{\rm {inv}}} \newcommand \codim {{\rm {codim}}} \newcommand \ptr {{\pi _2^{\rm tr}}} \newcommand \gom {{\mathfrak m}} \newcommand \goT {{\mathfrak T}} \newcommand \goC {{\mathfrak C}} \newcommand \goD {{\mathfrak D}} \newcommand \goM {{\mathfrak M}} \newcommand \goS {{\mathfrak S}} \newcommand \goH {{\mathfrak H}} \newcommand \sg {{\Sigma }} \newcommand \CHM {{\mathscr C\! \mathscr M}} \newcommand \DM {{\sf DM}} \newcommand \FS {{FS}} \newcommand \MM {{\mathscr M\! \mathscr M}} \newcommand \HS {{\mathscr H\! \mathscr S}} \newcommand \MHS {{\mathscr M\! \mathscr H\! \mathscr S}} \newcommand \Vect {{\mathscr V\! ect}} \newcommand \Gm {{{\mathbb G}_{\rm m}}} \newcommand \trdeg {{\rm {tr.deg}}} \newcommand \znak {{\natural }} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{example}[theorem]{Example} \newtheorem{question}[theorem]{Question} \newtheorem{assumption}[theorem]{Assumption} \newtheorem{fact}[theorem]{Fact} \newtheorem{crucialquestion}[theorem]{Crucial Question} \newcommand \lra {\longrightarrow} \newcommand \hra {\hookrightarrow} \def\color{blue}{\color{blue}} \def\color{red}{\color{red}} \def\color{green}{\color{green}} \newenvironment{pf}{\par\noindent{\em Proof}.}{ \framebox(6,6) \par } \title[Algebraic cycles on quadric sections of cubics in $\PR ^4$] {\bf Algebraic cycles on quadric sections of cubics in $\PR ^4$ under the action of symplectomorphisms} \author{V. Guletski\u \i , A. Tikhomirov} \date{24 March 2014} \begin{abstract} \noindent Let $\tau $ be the involution changing the sign of two coordinates in $\PR ^4$. 
We prove that $\tau $ induces the identity action on the second Chow group of the intersection of a $\tau $-invariant cubic with a $\tau $-invariant quadric hypersurface in $\PR ^4$. Let $l_{\tau }$ and $\Pi _{\tau }$ be the $1$- and $2$-dimensional components of the fixed locus of the involution $\tau $. We describe the generalized Prymian associated to the projection of a $\tau $-invariant cubic $\bcC \subset \PR ^4$ from $l_{\tau }$ onto $\Pi _{\tau }$ in terms of the Prymians $\bcP _2$ and $\bcP _3$ associated to the double covers of two irreducible components, of degree $2$ and $3$ respectively, of the reducible discriminant curve. This gives a precise description of the induced action of the involution $\tau $ on the continuous part in the Chow group $CH^2(\bcC )$. The action on the subgroup corresponding to $\bcP _3$ is the identity, and the action on the subgroup corresponding to $\bcP _2$ is the multiplication by $-1$. \end{abstract} \subjclass[2010]{14C15, 14C25, 14J28, 14J70, 14J30} \keywords{algebraic cycles, $K3$-surfaces, symplectomorphism, mixed Hodge structures, weight filtration, cubic threefolds, intermediate Jacobian, Beauville's pair, generalized Prym variety} \maketitle \section{Introduction} \label{s-intro} The aim of this paper is to have yet another look at the correlation between $0$-cycles on algebraic surfaces, on one side, and codimension $2$ algebraic cycles on their $3$-dimensional spreads, on the other. For that purpose we have chosen a fairly concrete model, the intersections of quadric and cubic hypersurfaces invariant under the involution $\tau $ changing the sign of two homogeneous coordinates in $\PR ^4$. Such intersections are $K3$-surfaces, and $\tau $ induces symplectomorphic actions on their second cohomology groups. We will study the induced actions on the second Chow groups of $\tau $-invariant cubics and their $\tau $-invariant quadric sections in $\PR ^4$.
The primary motivation comes from the Bloch-Beilinson conjecture on mixed motives, \cite{Jannsen}, which implies that the induced action of a symplectomorphism of a $K3$-surface on its Chow group of $0$-cycles must be the identity, \cite{Huybrechts}. The first results along this line had been obtained in \cite{Sur les 0-cycles}, where such identity action was proved for quartics in $\PR ^3$ and intersections of $3$ quadrics in $\PR ^5$, covering the cases $d=2$ and $d=4$ in terms of \cite{GeemenSarti}. In \cite{Pedrini} the identity action on the second Chow group was proved for the case $d=1$, and for $K3$-surfaces admitting elliptic pencils with sections. In the first part of the paper we slightly generalize the method developed in \cite{Sur les 0-cycles} and apply it to prove the identity action of the involution $\tau $ on the second Chow group for intersections of invariant cubics and quadrics in $\PR ^4$ (Theorem \ref{identity}). It should be noted that when the second version of our manuscript was published on the web, we learnt about the paper \cite{HuybrechtsKemeny}, where the identity action of symplectic involutions on the second Chow group was proved in one third of the moduli, and soon after C. Voisin proved the same for all symplectic involutions on $K3$-surfaces, \cite{VoisinNewPaper}. In \cite{Huybrechts2} Huybrechts showed that these results suffice to prove the identity action of symplectomorphisms of any finite order. In the second part we deal with the induced action of the involution $\tau $ on the second Chow group of a $\tau $-invariant cubic hypersurface $\bcC $ in $\PR ^4$. Our approach is based on the following geometric idea. The set of fixed points of the involution $\tau $ is a union of a line $l_{\tau }$ and a plane $\Pi _{\tau }$ in $\PR ^4$. Projecting $\bcC $ from $l_{\tau }$ onto the plane $\Pi _{\tau }$ we observe that the corresponding discriminant curve splits into a conic $C_2$ and a cubic $C_3$ in $\Pi _{\tau }$. 
Following \cite{Shokurov}, we construct an isogeny from the generalized Prymian $\bcP $ associated to the double cover of the whole discriminant curve $C_2\cup C_3$ onto the direct product of two Prymians $\bcP _2$ and $\bcP _3$, corresponding to the double covers of $C_2$ and $C_3$ respectively. An enjoyable thing here is that the involution $\tau $ induces the identity action on $\bcP _3$, and it is the multiplication by $-1$ on $\bcP _2$. This gives a complete description of the induced action of the involution $\tau $ on $\bcP $, on the continuous part $A^2(\bcC )$ in the second Chow group of the cubic $\bcC $, as well as on the Hodge pieces of its third cohomology group $H^3(\bcC ,\CC )$. In particular, the action on the subgroup in $A^2(\bcC )$ corresponding to $\bcP _3$ is the identity, and the action on the subgroup corresponding to $\bcP _2$ is the multiplication by $-1$ (Theorem \ref{cubic action}). Pulling back algebraic cycles to the generic fibre of a pencil of quadric sections we will see that those algebraic cycles which correspond to $\bcP _2$ vanish in the Chow group of the generic fibre. In a sense, this gives a geometrical reflection, in terms of split discriminant curves and corresponding Prymians, of the behaviour of codimension $2$ algebraic cycles predicted by the Bloch-Beilinson conjectures. {\sc Acknowledgements.} The authors are grateful to Sergey Gorchinskiy for useful suggestions and Claire Voisin for explanation of certain phenomena in algebraic cycles and encouraging interest in this work. The research was partially supported by the EPSRC grant EP/I034017/1. The second named author has been financially supported by the Ministry of Education and Science of the Russian Federation, and he also acknowledges the hospitality of the Max Planck Institute for Mathematics in Bonn during the winter of 2014. \section{Notation and terminology} \label{terminology} Throughout the paper we work over $\CC $.
By default, $H^*(-,A)$ are the Betti cohomology with coefficients $A=\ZZ $, $\QQ $ or $\CC $. The Chow groups will be with coefficients in $\ZZ $. For any quasi-projective variety $X$ over $\CC $, and for any non-negative integer $q$, let $CH_q(X)$ be the Chow group of dimension $q$ algebraic cycles modulo rational equivalence on $X$. If $X$ is equidimensional of dimension $d$ then the group $CH_q(X)$ will often be denoted by $CH^p(X)$, where $p=d-q$. Our main object of study will be the subgroups $A^p(X)\subset CH^p(X)$ generated by cycles algebraically equivalent to zero on $X$. Let $X$ be a nonsingular projective complex variety and let $CH^p(X)_{\hom }$ be the kernel of the cycle class homomorphism $cl:CH^p(X)\to H^{2p}(X,\ZZ )$. Each group $H^i(X,\ZZ )$ carries a pure weight $i$ Hodge structure on it. Let $F^p$ be the corresponding decreasing Hodge filtration on the complexified vector space $H^i(X,\CC )$, and let $H^{p,q}(X)$ be the adjoint quotient $(F^{p}/F^{p+1})H^{p+q}(X,\CC )$. The filtration $F^p$ is opposite to the complex conjugate filtration $\bar F^p$, in the sense that $F^p\oplus \overline{F^{q+1}}=H^{p+q}$. Let $$ J^p(X)=H^{2p-1}(X,\CC )/(\im (H^{2p-1}(X,\ZZ ))+F^pH^{2p-1}(X,\CC )) $$ be the $p$-th intermediate Jacobian of $X$. Here $\im (H^{2p-1}(X,\ZZ))$ is the image of the natural homomorphism from integral to complex cohomology, i.e. the group $H^{2p-1}(X,\ZZ )$ modulo the torsion subgroup in it. The Poincar\'e duality gives the isomorphism $$ J^p(X)\simeq (F^{d-p+1}H^{2d-2p+1}(X,\CC ))^{\vee }/ \im (H_{2d-2p+1}(X,\ZZ ))\; , $$ where $d$ is the dimension of $X$ and $\im (H_{2d-2p+1}(X,\ZZ ))$ is the image of the integral homology in the dual space of the $(d-p+1)$-th term of the Hodge filtration via integration of forms over topological chains. Integrating over $(2d-2p+1)$-dimensional topological chains whose boundaries are homologically trivial $(d-p)$-cycles gives the Abel-Jacobi homomorphism $AJ:CH^p(X)_{\hom }\to J^p(X)$.
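For later reference, let us also recall the resulting explicit formula; this is the standard description, see \cite{Voisin Book 1}, stated in the notation above. If $Z$ is a homologically trivial algebraic cycle of codimension $p$ on $X$ and $\Gamma $ is a topological chain of dimension $2d-2p+1$ with $\partial \Gamma =Z$, then $$ AJ(Z)(\omega )=\int _{\Gamma }\omega \; ,\qquad \omega \in F^{d-p+1}H^{2d-2p+1}(X,\CC )\; , $$ the functional $AJ(Z)$ being well defined in $J^p(X)$, i.e. up to the periods $\int _{\gamma }\omega $ over closed chains $\gamma $, which form the subgroup $\im (H_{2d-2p+1}(X,\ZZ ))$.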
For any nonsingular projective complex variety $Y$ and a cycle $Z$ of codimension $p$ on $Y\times X$ flat over $Y$, we can fix a closed point $y_0$ on $Y$ and define a map from $Y$ to $J^p(X)$ by sending a closed point $y\in Y$ to the image of the algebraically trivial cycle $Z_y-Z_{y_0}$ under the Abel-Jacobi homomorphism $AJ$. This extends to the homomorphism $A_0(Y)\to J^p(X)$, where $A_0(Y)$ is the subgroup generated by algebraically trivial $0$-cycles in $CH_0(Y)$. The image of this homomorphism is a complex subtorus $T_{Y,Z}$ in $J^p(X)$ whose tangent space at $0$ is contained in $H^{p-1,p}(X)$. Let $J_{\alg }^p(X)$ be the maximal subtorus of $J^p(X)$ having this property, so that $J_{\alg }^p(X)$ contains $T_{Y,Z}$ for all possible $Y$ and $Z$. Then $J_{\alg }^p(X)$ is an abelian variety over $\CC $, which is a functor in $X$. The homomorphism $AJ$ sends $A^p(X)$ into $J_{\alg}^p(X)$, so that we obtain the Abel-Jacobi homomorphism $AJ:A^p(X)\to J_{\alg}^p(X)$ on algebraic parts. Notice that the latter homomorphism is expected to be surjective. This is not known in general, but will be the case in the concrete applications below. All the details on intermediate Jacobians and Abel-Jacobi homomorphisms can be found, for example, in Chapter 12 of \cite{Voisin Book 1}. Throughout the paper, for an abelian group $A$ and a finite group $G$ acting on $A$ let $N:A\to A$ be the integral averaging operator sending $a\in A$ to $\sum _{g\in G}g(a)$, let $A^G$ be the subgroup of $G$-invariant elements in $A$, and let $A^{\sharp }$ be the kernel of the endomorphism $N$. If the group $A$ is divisible and torsion free, then it is a direct sum of $A^G$ and $A^{\sharp }$: setting $n=|G|$, one has $a=\frac{1}{n}N(a)+(a-\frac{1}{n}N(a))$, where the first summand is $G$-invariant, the second is killed by $N$ because $N\circ N=nN$, and $A^G\cap A^{\sharp }=0$ since $N(a)=na$ for $a\in A^G$. Let $\znak $ stand for $G$ or $\sharp $ simultaneously. Assume now that a finite group $G$ acts by regular automorphisms on $X$. For any $g\in G$ let $g^*$ be the induced automorphism of $H^p(X,A)$, where $A$ is $\ZZ $, $\QQ $ or $\CC $. Each $g^*$ preserves the degrees of differential forms.
This is why it preserves $H^{p,q}(X)$ and so the Hodge filtration. Moreover, $g^*$ is compatible with the integration of forms, which gives the automorphism $g^*$ of the complex torus $J^p(X)$ induced by the above automorphisms of complex and integral cohomology groups. For the same reason, $g^*$ is compatible with the corresponding automorphism of $CH^p(X)_{\hom }$ via the Abel-Jacobi homomorphism $AJ$. Since $J_{\alg }^p(X)$ is a functor of $X$, the automorphism $g^*$ gives the automorphism of the abelian variety $J_{\alg }^p(X)$. The norm $N=\sum _{g\in G}g^*$ is an endomorphism of $J_{\alg }^p(X)$ as an abelian variety over $\CC $. Note that $J_{\alg }^p(X)^{\znak }$ is an abelian subvariety in $J^p_{\alg }(X)$ and one has the Abel-Jacobi homomorphisms $$ AJ^{\znak } : A^p(X)^{\znak }\lra J_{\alg}^p(X)^{\znak } $$ for $\znak =G$ and $\znak =\sharp $, which will play an important role in what follows. Next, let $d=2$ and let $\NS (X)$ be the N\'eron-Severi group of the surface $X$. The space $\NS (X)\otimes \QQ $ can be identified with the image $H^2(X,\QQ )_{\alg }$ of the Chow $\QQ $-vector space $CH^1(X)\otimes \QQ $ under the cycle class map to $H^2(X,\QQ )$. The $\cup \; $-product on $H^2(X,\QQ )$ is non-degenerate by the Poincar\'e duality theorem, and it remains non-degenerate after the restriction to $H^2(X,\QQ )_{\alg }$ by the Hodge index theorem. This is why we can consider the orthogonal complement $H^2(X,\QQ )_{\tr }$ to $H^2(X,\QQ )_{\alg }$ with respect to the intersection pairing on $H^2(X,\QQ )$. The group $H^2(X,\QQ )$ is called algebraic if $H^2(X,\QQ )_{\tr }$ is trivial. This is equivalent to saying that $p_g=0$, where $p_g=\dim H^{2,0}(X)$ is the geometric genus of the surface $X$. The action of $G$ is compatible with the complex conjugation on the Dolbeault cohomology. This implies that $H^{2,0}(X)^G=0$ if and only if $H^{0,2}(X)^G=0$, and $H^{2,0}(X)^{\sharp }=0$ if and only if $H^{0,2}(X)^{\sharp }=0$.
Thus, if $H^{2,0}(X)^G=0$ then $H^2(X,\CC )^G=H^{1,1}(X)^G$, whence the $\QQ $-vector space $H^2(X,\QQ )^G$ is algebraic in the sense that any cohomology class in $H^2(X,\QQ )^G$ comes from a class in $\NS (X)^G\otimes \QQ $ via the cycle class map. Similarly, if $H^{2,0}(X)^{\sharp }=0$ then $H^2(X,\CC )^{\sharp }=H^{1,1}(X)^{\sharp }$, so that the $\QQ $-vector space $H^2(X,\QQ )^{\sharp }$ is algebraic, i.e. a cohomology class in $H^2(X,\QQ )^{\sharp }$ comes from a certain class in the N\'eron-Severi group $\NS (X)^{\sharp }\otimes \QQ $ via the cycle class map. In the second half of the paper we will be mainly interested in the case when $G$ is a group of order $2$ generated by an involution $\tau $ acting on a nonsingular projective variety $X$. In particular, if $X$ is a $K3$-surface over $\CC $, $\omega \in H^{2,0}(X)$ a symplectic form on $X$ and $\tau ^*(\omega )=\omega $, then we say that $\tau $ is a symplectomorphism of order $2$ or a Nikulin involution on $X$. \section{Voisin's theorem} \label{voisin} Let $\bcX $ be a nonsingular projective threefold and let $f:\bcX \dasharrow \PR ^1$ be a pencil of surfaces on $\bcX $ with base locus $B$. We will assume that $f$ is nice in the sense that $B$ is nonsingular and that it is the irreducible transversal intersection of two generic members of the pencil. Let $\tilde \bcX \to \bcX $ be the blow up of $\bcX $ at $B$ giving the regular map $\tilde f: \tilde \bcX \to \PR ^1$. The short exact sequence for Chow groups under blowup, \cite[Section 6.7]{Fulton}, yields the isomorphism $A^2(\tilde \bcX )\simeq A^2(\bcX )\oplus A^1(B)$. The Jacobian $\Jac (B)$ of the curve $B$ is a direct summand of the intermediate Jacobian $J^2(\tilde \bcX )$, see \cite{ClemensGriffiths} or \cite{Tjurin}. As taking algebraic parts in intermediate Jacobians is functorial, we obtain the regular surjective morphism of abelian varieties $\epsilon :J_{\alg }^2(\tilde \bcX )\to \Jac (B)$.
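Let us recall the standard mechanism behind the latter splitting. Blowing up the threefold $\bcX $ along the nonsingular curve $B$ gives the isomorphism of integral Hodge structures $$ H^3(\tilde \bcX ,\ZZ )\simeq H^3(\bcX ,\ZZ )\oplus H^1(B,\ZZ )\; , $$ which induces the decomposition $J^2(\tilde \bcX )\simeq J^2(\bcX )\oplus \Jac (B)$ of intermediate Jacobians, and $\epsilon $ is the restriction of the projection onto the second summand to the abelian subvariety $J_{\alg }^2(\tilde \bcX )$.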
Assume that a finite group $G$ acts on $\bcX $ fibre-wise. Then $G$ acts also on $B$, and $\epsilon $ induces the regular epimorphism $\epsilon ^{\znak }:J_{\alg }^2(\tilde \bcX )^{\znak }\to \Jac (B)^{\znak }$ which will be used later. For any $t\in \PR ^1$ let $X_t$ be the fibre of the regular morphism $\tilde f$. We impose the following assumption on the pencil $f:\bcX \dasharrow \PR ^1$. \begin{itemize} \item[(A)]{} {\it $\exists $ a Zariski open $U\subset \PR ^1$, such that $X_t$ is nonsingular, $H^2(X_t,\QQ )^{\znak }$ is algebraic and $H^1(X_t,\QQ )^{\znak }=0$ for any $t\in U$.} \end{itemize} For any $t\in \PR ^1$ let $j_t:B\to X_t$ be the closed embedding of the base locus into the fibre. Since $B$ is a Cartier divisor in $X_t$, the embedding $j_t$ induces the Gysin homomorphism $j_t^*:CH^1(X_t)\to CH^1(B)$, see \cite[Section 2.6]{Fulton}. \begin{theorem} \label{Voisin's theorem} Under Assumption (A), the group $A^1(B)^{\znak }$ is contained in the image of the natural homomorphism $\oplus _{t\in \PR ^1}CH^1(X_t)^{\znak }\to CH^1(B)^{\znak }$ induced by the homomorphisms $j_t^*$. \end{theorem} \begin{pf} Let $U$ be as in (A), let $\eta $ be the generic and $\bar \eta $ the geometric generic point of $\PR ^1$. Consider the relative Picard scheme $\bcP \to U$ of the pull-back $\tilde f_U:\tilde \bcX _U\to U$ of $\tilde f$ to $U$. Its fibre $\bcP _{\bar \eta }$ is the Picard scheme of the fibre $X_{\bar \eta }$ and, by Assumption (A), $\bcP _{\bar \eta }^{\znak }$ is isomorphic to the N\'eron-Severi group $\NS (X_{\bar \eta })^{\znak }$. The group $G$ acts in the fibres of the structural morphism from $\bcP $ onto $U$. Since $\NS (X_{\bar \eta })$ is finitely generated, we choose a finite number of points $P_1,\dots ,P_n$ in $\bcP _{\bar \eta }^{\znak }$ generating the group $\NS (X_{\bar \eta })^{\znak }$. 
For each $P_i$ let $W_i\to U$ be a finite morphism onto the curve $U$, such that the curve $W_i$ is nonsingular and the residue field of the scheme $\bcP _{\eta }$ at $P_i$ is $\CC (W_i)$. Let $W^{\znak }$ be the union of the curves $W_i$ with the structural morphism $g^{\znak }:W^{\znak }\to U$. Then, for any $t\in U$, the fibre $W^{\znak }_t$ generates the $\QQ $-vector space $\NS (X_t)_{\QQ }^{\znak }$, which in turn is isomorphic to $H^2(X_t,\QQ )^{\znak }$ by the assumption imposed on $f$. As $U$ is locally contractible, the stalk of $R^0g^{\znak }_*\QQ $ at $t\in U$ is $H^0(W^{\znak }_t,\QQ )$ and the stalk of $R^2\tilde f_*\QQ $ at $t$ is $H^2(X_t,\QQ )$. Shrinking $U$ if necessary we can assume that $g^{\znak }$ and $f_U$ are smooth. Trivializing these morphisms in complex topology one can define the sheaf $(R^2(\tilde f_U)_*\, \QQ )^{\znak }$ and the surjective morphism $$ \alpha : R^0g^{\znak }_*\QQ \lra (R^2(\tilde f_U)_*\, \QQ )^{\znak } $$ of sheaves on $U$, such that for any $t\in U$ and $P\in W_t^{\znak }$ the local homomorphism $\alpha _t$ sends $P_i$ to the corresponding element in $\NS (X_t)$. Since $U$ is homotopy equivalent to a $1$-dimensional $CW$-complex, it has cohomological dimension $1$, so that the induced homomorphism $$ \alpha _* : H^1(U,R^0g^{\znak }_*\QQ )\to H^1(U,(R^2(\tilde f_U)_*\, \QQ )^{\znak }) $$ is surjective. Let $\bcY _i$ be the fibred product $W_i\times _U\tilde \bcX _U$ and let $\bcS _i\to W_i$ be the Picard scheme of the relative scheme $\bcY _i\to W_i$. Each point $P_i$ is rational over the field $\CC (W_i)$, giving a section of the morphism $\bcS _i\to W_i$ over some possibly smaller Zariski open subset $V_i$ in $W_i$. This section in turn induces the section of the morphism $\bcY _i\times _{W_i}\bcS _i\to \bcY _i$ over $(\bcY _i)_{V_i}$. Let $\bcD _i$ be the pull-back of the Poincar\'e divisor on the scheme $\bcY _i\times _{W_i}\bcS _i$ to $(\bcY _i)_{V_i}$, and let $\bcD $ be the union of $\bcD _i$'s.
Using appropriate nonsingular compactifications $\bar W_i$ and considering their union $\bar W^{\znak }$ we can also work with the morphism $\bar g^{\znak }:\bar W^{\znak }\to \PR ^1$, such that $g^{\znak }$ is the pull-back of $\bar g^{\znak }$ under the embedding of $U$ into $\PR ^1$. Let $\bar \bcD _i$ be the closure of $\bcD _i$ in the fibred product $\bar W^{\znak }\times \tilde \bcX $ over $\Spec (\CC )$ and $\bar \bcD $ be the union of $\bar \bcD _i$'s. Then $\bar \bcD $ is an algebraic cycle of codimension $2$ in the fourfold $\bar W^{\znak }\times \tilde \bcX $. Let $$ \bar \beta = cl(\bar \bcD )\in H^4(\bar W^{\znak }\times \tilde \bcX ,\ZZ ) $$ be the cohomology class of $\bar \bcD $. Passing to rational coefficients in cohomology groups, the $(1,3)$-K\"unneth component $\bar \beta (1,3)$ of $\bar \beta $ induces the homomorphism of Hodge structures $$ \bar \beta (1,3)_* : H^1(\bar W^{\znak },\QQ )\to H^3(\tilde \bcX ,\QQ )\; . $$ Let $\bar \beta (1,3)_{i,*}$ be its restriction on $H^1(\bar W^{\znak }_i,\ZZ )$ and let $J_i$ be the Jacobian of the curve $\bar W_i$. Then $\bar \beta (1,3)_{i,*}$ induces the regular morphism $$ \bar \beta (1,3)_{i,*} : J_i\lra J^2(\tilde \bcX )\; , $$ which factorizes through $J_{\alg }^2(\tilde \bcX )$. For any two points $P$ and $P_0$ on $W_i$, let $D_P$ and $D_{P_0}$ be the divisors on $X_t$ and $X_{t_0}$ respectively, whose cohomology classes correspond to $P$ and $P_0$ as points in the fibres of the morphism $\bcP \to U$. Then $\bar \beta (1,3)_{i,*}([P-P_0])=AJ[D_P-D_{P_0}]$ in $J_{\alg }^2(\tilde \bcX )$, see Theorems 12.4 and 12.17 in \cite{Voisin Book 1}. Next, $R^1g^{\znak }_*\QQ =0$ as the fibres $W_t^{\znak }$ are $0$-dimensional, and $(R^3(\tilde f_U)_*\QQ )^{\znak }=0$ by Assumption (A). 
Since the corresponding Leray spectral sequences degenerate at the $E_2$-term, we obtain the isomorphisms $$ H^1(U,R^0g^{\znak }_*\QQ )\simeq H^1(W^{\znak },\QQ ) \quad \hbox{and}\quad H^1(U,(R^2(\tilde f_U)_*\QQ )^{\znak })\simeq H^3(\tilde \bcX _U,\QQ )^{\znak }\; . $$ Due to the above description of the action of $\bar \beta (1,3)_{i,*}$ on $[P-P_0]$, one can check that the diagram $$ \xymatrix{ H^1(\bar W^{\znak },\QQ ) \ar[dd]^-{r_1} \ar[rr]^-{\bar \beta (1,3)_*} & & H^3(\tilde \bcX ,\QQ )^{\znak } \ar[dd]^-{r_2} \\ \\ H^1(W^{\znak },\QQ ) \ar[rr]^-{\zeta _*} & & H^3(\tilde \bcX _U,\QQ )^{\znak } } $$ \noindent commutes, where $\zeta _*$ is the modification of $\alpha _*$ by means of the above two isomorphisms coming from the Leray spectral sequences, and $r_1$, $r_2$ are the restriction homomorphisms on cohomology groups. By Deligne's results, the cohomology groups at the bottom possess mixed Hodge structures with weights $$ W_0H^1(W^{\znak },\QQ )=\im (r_1)\qquad \hbox{and}\qquad W_0H^3(\tilde f^{-1}(U),\QQ )^{\znak }=\im (r_2)\; , $$ see \cite[Section 11.1.4]{Voisin Book 2}. Morphisms between mixed Hodge structures are strict with respect to both Hodge and weight filtrations, see \cite{HodgeTheoryII}. Since $\zeta _*$ is a morphism of mixed Hodge structures and it is surjective, we obtain that $$ \zeta _*(W_0H^1(W^{\znak },\QQ ))= W_0H^3(\tilde \bcX _U,\QQ )^{\znak }\; . $$ This gives that $\im (r_2)=\im (r_2\circ \bar \beta (1,3)_*)$. Then $H^3(\tilde \bcX ,\QQ )^{\znak }$ is generated by $\ker (r_2)$ and $\im (\bar \beta (1,3)_*)$. On the other hand, $\ker (r_2)$ is generated by the images of the homomorphisms $$ (i'_t)_*:H^1(X'_t,\QQ )^{\znak }\to H^3(\tilde \bcX ,\QQ )^{\znak }\; , $$ where $t\in \PR ^1\smallsetminus U$, $X'_t$ is the resolution of singularities of $X_t$ and $i'_t$ is the composition of the desingularization $X_t'\to X_t$ with the closed embedding $i_t:X_t\to \bcX $.
All these things together give that the homomorphism $$ \theta : H^1(\bar W^{\znak },\QQ )\oplus (\oplus _{t\in \PR ^1\smallsetminus U}H^1(X'_t,\QQ )^{\znak })\to H^3(\tilde \bcX ,\QQ )^{\znak }\; , $$ induced by the homomorphisms $\bar \beta (1,3)_*$ and $(i_t)_*$, is surjective. Since $\theta $ is a homomorphism of polarized Hodge structures, it induces a surjective homomorphism of the corresponding abelian varieties $$ \rho ^{\znak }:J^{\znak }\oplus (\oplus _{t\in \PR ^1\smallsetminus U} (\bcP '_{t,\, 0})^{\znak })\to J_{\alg }^2(\tilde \bcX )^{\znak }\; , $$ where $J^{\znak }$ is the union of $J_i$'s and $\bcP '_{t,0}$ is the component of $0$ in the Picard scheme $\bcP '_t$ of the surface $X'_t$. Now, let $\tilde B$ be the exceptional divisor of the blow-up $\tilde \bcX \to \bcX $, $p:\tilde B\to B$ the projection and $e:\tilde B\to \tilde \bcX $ the embedding of $\tilde B$ into $\tilde \bcX $. Let $\varepsilon ^{\znak }:CH^2(\tilde \bcX )^{\znak }\to CH^1(B)^{\znak }$ be the composition of the pull-back $(e^*)^{\znak }:CH^2(\tilde \bcX )^{\znak }\to CH^2(\tilde B)^{\znak }$ and push-forward $p_*^{\, \znak }:CH^2(\tilde B)^{\znak }\to CH^1(B)^{\znak }$, induced by $e$ and $p$ respectively. For each $t\in \PR ^1$ let $\tilde i_t$ be the closed embedding of the fibre $X_t$ into $\tilde \bcX $, and let $(\tilde i_t)_*^{\, \znak }:CH^1(X_t)^{\znak }\to CH^2(\tilde \bcX )^{\znak }$ be the push-forward homomorphism induced by the closed embedding $\tilde i_t$. 
The above homomorphism $\rho ^{\znak }$, surjective on the level of Jacobians, together with the homomorphism $\varepsilon ^{\znak }$, guarantees that, on the level of Chow groups, $A^1(B)^{\znak }$ is contained in the image of the sum $$ \oplus _{t\in \PR ^1} (\varepsilon ^{\znak }\circ (\tilde i_t)_*^{\, \znak }): \oplus _{t\in \PR ^1}CH^1(X_t)^{\znak }\to CH^1(B)^{\znak }\; . $$ A straightforward verification shows that, for each $t\in \PR ^1$, the composition $\varepsilon ^{\znak }\circ (\tilde i_t)_*^{\, \znak }$ is the restriction of the pull-back homomorphism $j_t^*:CH^1(X_t)\to CH^1(B)$ to the $\znak $-parts of the Chow groups. This completes the proof of Theorem \ref{Voisin's theorem}. \end{pf} \begin{remark} \label{move} {\rm Theorem \ref{Voisin's theorem} can be strengthened by saying that $A^1(B)^{\znak }$ is contained in the image of the homomorphism $\oplus _{t\in \PR ^1\smallsetminus Z}CH^1(X_t)^{\znak }\to CH^1(B)^{\znak }$, where $Z$ is a finite subset in $U$. This is because we can always move a zero-cycle in its class modulo rational equivalence on $\bar W^{\znak }$. } \end{remark} \section{The $\tau $-action on $CH^2(S)$} \label{idaction} Symplectic automorphisms of finite order of a $K3$-surface over $\CC $ have order $\leq 8$, see \cite{Nikulin}. If $\tau $ is a Nikulin involution, then $\rho \geq 9$, where $\rho $ is the rank of the N\'eron-Severi group $NS(X)$, see \cite{GeemenSarti}. Assume that $\rho =9$ and let $L$ be a generator of the orthogonal complement of the lattice $E_8(-2)$ in $NS(X)$ whose self-intersection is $2d$, for some positive integer $d$. Let $\Gamma $ be the direct sum of $\ZZ L$ and $E_8(-2)$ if the integer $d$ is odd, or the unique even lattice containing $\ZZ L\oplus E_8(-2)$ as a sublattice of index $2$ if $d$ is even. For each $\Gamma $ there exists a $K3$-surface $X$ with a Nikulin involution and $\rho =9$, such that $NS(X)\simeq \Gamma $, and all such surfaces are parametrized by a coarse moduli space of dimension $11$, see \cite{GeemenSarti}, Proposition 2.3. 
Let $S_0$ be a $K3$-surface over $\CC $ with a Nikulin involution $\tau $, such that $\rho =9$ and $d=3$. In this case the generator $L$ gives the regular embedding $\phi _L:S_0\to \PR ^4$, which identifies $S_0$ with the complete intersection of a nonsingular cubic $\bcC _0$ and a quadric $\bcQ _0$ in $\PR ^4$. The involution $\tau $ extends to the involution $\tau _{\PR ^4}$ on the whole projective space $\PR ^4$. In suitable coordinates, $\tau _{\PR ^4}$ sends $(x_0:x_1:x_2:x_3:x_4)$ to $(-x_0:-x_1:x_2:x_3:x_4)$. The cubic $\bcC _0$ and the quadric $\bcQ _0$ are both invariant under $\tau _{\PR ^4}$. Conversely, if $\bcC _0$ and $\bcQ _0$ are a general nonsingular cubic and quadric in $\PR ^4$, both invariant under the involution $\tau _{\PR ^4}$, and such that their intersection $S_0=\bcC _0\cap \bcQ _0$ is nonsingular, then $S_0$ is a $K3$-surface with the Nikulin involution $\tau =\tau _{\PR ^4}|_{S_0}$, see Section 3.3 in \cite{GeemenSarti}. For short, we will write $\tau $ for $\tau _{\PR ^4}$ and for the involution on $S_0$ simultaneously. The fixed locus of $\tau $ on $\PR ^4$ is the disjoint union of the line $l_{\tau }$ and the plane $\Pi _{\tau }$ given by the equations \begin{equation} \label{tau-invariant} l_{\tau } : x_2 = x_3 = x_4 = 0\; ,\qquad \hbox{and}\qquad \Pi _{\tau } : x_0 = x_1 = 0\; . \end{equation} Let $V$ be a vector space, such that $\PR ^4=\PR (V)$. In coordinate-free terms, $\tau $ lifts to the involution $\tau :V\to V$ which induces two involutions $\tau _i:\Sym ^iV^{\vee }\to \Sym ^iV^{\vee }$, where $i=2,3$, $\Sym ^i$ stands for the $i$-th symmetric power, and $V^{\vee }$ is the vector space dual to $V$. Consider the subspaces $$ (\Sym ^iV^{\vee })_+ = \{ F\in \Sym ^iV^{\vee }\; |\; \tau _i(F)=F\} $$ \noindent for $i=2,3$. 
Any $F\in (\Sym ^2V^{\vee })_+$ has the shape \begin{equation} \label{S2+} \alpha_{00}x_0^2+\alpha_{11}x_1^2+\alpha_{01}x_0x_1+ f_2(x_2,x_3,x_4) \end{equation} and any $\Phi \in (\Sym ^3V^{\vee })_+$ has the shape \begin{equation} \label{S3+} l_{00}(x_2,x_3,x_4)x_0^2+l_{11}(x_2,x_3,x_4)x_1^2+l_{01}(x_2,x_3,x_4)x_0x_1 +f_3(x_2,x_3,x_4)\; , \end{equation} where $\alpha _{ij}$ are constants, $l_{ij}$ are linear forms, and $f_2$ and $f_3$ are homogeneous polynomials of degree $2$ and $3$ respectively. If $$ \bcL _2 = \PR ((\Sym ^2V^{\vee })_+)\quad \hbox{and}\quad \bcL _3 = \PR ((\Sym ^3V^{\vee })_+)\; , $$ then $\bcQ _0\in \bcL _2\subset |\bcO (2)|$ and $\bcC_0\in \bcL _3\subset |\bcO (3)|$. The explicit formulae (\ref{tau-invariant}) and (\ref{S3+}) show that any cubic $\bcC \in \bcL _3$ contains the line $l_{\tau }$. From (\ref{S3+}) it follows that $\bcL _3$ is spanned by the subsystems $$ \bcL _{3,i}=\PR (V_i)\; ,\qquad i=1,2\; , $$ where $V_1$ is the subspace of forms in $\Sym ^3V^{\vee }$ of the shape $$ l_{00}(x_2,x_3,x_4)x_0^2+l_{11}(x_2, x_3, x_4)x_1^2+ l_{01}(x_2, x_3, x_4)x_0x_1 $$ and $V_2$ is the subspace of forms in $\Sym ^3V^{\vee }$ of the shape $$ f_3(x_2,x_3,x_4)\; , $$ and the forms $l_{ij}$, $f_3$ are those described above. The subgroup $$ G=\{ g\in \PGL (5)\; \, |\; \, g(l_{\tau })= l_{\tau }\; , g(\Pi _{\tau })=\Pi _{\tau }\} $$ acts transitively on the set $\PR ^4\smallsetminus (l_{\tau }\sqcup \Pi _{\tau })$. It also acts naturally on the linear system $|\bcO (3)|$ and preserves the subspaces $\bcL _{3,1}$ and $\bcL _{3,2}$ in it. Then $\bcL _3$ is preserved under the $G$-action too. Let $P_0=(1:1:1:1:1)$ be the point in $\PR ^4$. From (\ref{S3+}) we have that the subsystem $\{ \bcC \in \bcL _3\; |\; P_0\in \bcC \} $ is a proper hyperplane in $\bcL _3$. 
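For later use we record the dimensions of these linear systems; this is a routine parameter count based on the shapes (\ref{S2+}) and (\ref{S3+}): $$ \dim \, (\Sym ^2V^{\vee })_+ = 3+6 = 9 \qquad \hbox{and}\qquad \dim \, (\Sym ^3V^{\vee })_+ = 3\cdot 3+10 = 19\; , $$ \noindent since the three coefficients $\alpha _{ij}$ contribute one parameter each, the three linear forms $l_{ij}(x_2,x_3,x_4)$ contribute three parameters each, and the spaces of quadratic and cubic forms in $x_2$, $x_3$, $x_4$ have dimensions $6$ and $10$ respectively. Hence $\bcL _2\simeq \PR ^8$ and $\bcL _3\simeq \PR ^{18}$. 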
Since $G$ acts transitively on $\PR ^4\smallsetminus(l_{\tau }\sqcup \Pi _{\tau })$, it follows that, for any point $P\in \PR ^4\smallsetminus (l_{\tau }\sqcup \Pi _{\tau })$, the subsystem $\{ \bcC \in \bcL _3\; |\; P\in \bcC \} $ is also a proper hyperplane of $\bcL _3$. Together with the fact that every cubic in $\bcL _3$ contains $l_{\tau }$, this shows that the line $l_{\tau }$ is the base locus of the linear system $\bcL _3$. Consider the linear system $$ \bcM _3=\{ \bcC \in \bcL_3 \; |\; S_0 \subset \bcC\} $$ and its subset $\bcN _3$ of the unions $\bcQ _0 \cup H\in \bcM _3$, such that $H$ is a hyperplane containing the line $l_\tau $. Then $$ \bcN _3\simeq \PR ^2\qquad \hbox{and}\qquad \bcM _3= \spn _{\bcL _3}(\bcN _3,\bcC _0) \simeq \PR ^3\; . $$ We also need the linear system $$ \Sigma = \{ B_1\subset S_0 \; | \; B_1=S_0 \cap \bcC _1\; , \; \; \bcC _1\in \bcL _3\smallsetminus \bcM _3\} $$ of curves on $S_0$ cut out by cubics from $\bcL _3\smallsetminus \bcM _3$. As $\bcL _3\simeq \PR ^{18}$ and $\bcM _3\simeq \PR ^3$, it follows that $\Sigma \simeq \PR ^{14}$. For any point $P$ in $S_0$, the set $$ \Sigma _P=\{B_1\in\Sigma \; | \; P\in B_1\} $$ is a hyperplane in $\Sigma $ if $P\not \in S_0\cap l_{\tau }$, and $\Sigma _P=\Sigma $ otherwise. In both cases, as $\dim (\Sigma )=14$, for any two distinct points $P$ and $Q$ in $S_0$ we have that $$ \dim (\Sigma _P\cap \Sigma _Q)\geq 12\; . $$ \begin{lemma} \label{ermak} For a general choice of $\bcC _0$ and $\bcQ _0$, there is a nonempty Zariski open subset $N_0$ in $S_0$, such that, if $P_0$ is a point in $N_0$, one can find a nonempty Zariski open subset $U_0$ in $S_0$ having the property that for each point $P$ in $U_0$ there exists a nonsingular curve $B_1\in \Sigma $ passing through $P$ and $P_0$ on $S_0$. \end{lemma} \begin{pf} For a general $\bcQ _0$ in $\bcL _2$, the quadric $\bcQ _0$ intersects $l_{\tau }$ at two distinct points, say $P_+$ and $P_-$. As $l_{\tau }$ is the base locus of $\bcL _3$, the union $S_0\cap l_{\tau }=P_+\sqcup P_-$ is the base locus of the linear system $\Sigma $ on $S_0$. 
Let $V_+$ be the set of all triples $(\bcC _0,\bcC _1,\bcQ )\in \bcL _3\times \bcL _3\times \bcL _2$, such that $\bcC _0\neq \bcC _1$, the set $B_1=\bcC _0\cap \bcC _1\cap \bcQ $ has dimension $1$ in a Zariski open neighbourhood of the point $P_+$, and $B_1$ is nonsingular at $P_+$. Then $V_+$ is a Zariski open subset in $\bcL _3\times \bcL _3\times \bcL _2$. In appropriate coordinates, $P_+=(1:0:0:0:0)$ and $P_-=(0:1:0:0:0)$. Then the triple $\bcC _0=\{ x_2x_0^2=0\} $, $\bcC _1=\{ x_3x_0^2=0\} $, $\bcQ =\{ x_0x_1=0\} $ is in $V_+$, whence $V_+\neq \emptyset $. Similarly, one can construct the nonempty open subset $V_-$ in $\bcL _3\times \bcL _3\times \bcL _2$ for the point $P_-$. Combined with Bertini's theorem, this gives that, for a general choice of $\bcC _0$ and $\bcQ _0$, there is a nonempty Zariski open subset $V$ in $\Sigma $, such that each curve $B_1\in V$ is nonsingular. The set $T=\{ (P,Q)\in S_0\times S_0\, |\, \dim (\Sigma _P\cap \Sigma _Q)=12\} $ is Zariski open, and hence irreducible, in $S_0\times S_0$. The set $Z=\{ (P,Q,B_1)\in T\times \Sigma \, |\, P,Q\in B_1\} $ is Zariski closed in $T\times \Sigma $. The projection $\pi :Z\to T$ is surjective. Since $\pi $ is a $\PR ^{12}$-bundle over $T$ and $T$ is irreducible, $Z$ is irreducible too. Let $B_1$ be any curve in $V$ and let $P$ be any point on $B_1$, not equal to $P_+$ or $P_-$. The set $F_P=\{ Q\in S_0\, |\, \dim (\Sigma _P\cap \Sigma _Q)>12\} $ is at most finite. If $Q\in B_1\smallsetminus F_P$ then $(P,Q,B_1)\in Z$ and $(P,Q,B_1)\in S_0\times S_0\times V$. Therefore, $W=Z\cap (S_0\times S_0\times V)$ is a nonempty Zariski open subset in the irreducible quasi-projective variety $Z$. As $\pi $ is surjective, the Zariski closure of the set $\pi (W)$ is $T$. Since, moreover, the image $\pi (W)$ is constructible, it contains a subset $T_0$, which is open and dense in $T$. Then $T_0$ is Zariski open and dense also in $S_0\times S_0$. 
It follows that the image of $T_0$ under the projection of $S_0\times S_0$ onto the second factor contains a nonempty Zariski open subset $N_0$. Then, for any point $P_0\in N_0$, let $U_0$ be the image of the set $T_0\cap (S_0\times \{ P_0\} )$ under the projection of $S_0\times S_0$ onto the first factor. \end{pf} \begin{theorem} \label{identity} Let $S$ be a general nonsingular complete intersection of cubic and quadric hypersurfaces, both invariant under the involution $\tau $ in $\PR ^4$. Then the action $\tau ^*:CH^2(S)\to CH^2(S)$ is the identity. \end{theorem} \begin{pf} In the above terms, $S=S_0$ is the intersection of the $\tau $-invariant nonsingular cubic $\bcC _0$ and quadric $\bcQ _0$ in $\PR ^4$. Let $N_0$ be the nonempty Zariski open subset in $S_0$ coming from Lemma \ref{ermak}, and let $P_0$ be a point in $N_0$. As the action of $\tau ^*$ does not change the degree of $0$-cycles on $S$, to prove the theorem all we need to show is that, for any point $P$ on $S_0$, the cycle class $[P-P_0]$ is $\tau $-invariant. Let $U_0$ be the nonempty Zariski open subset in $S_0$, depending on $P_0$, also as in Lemma \ref{ermak}. By the Chow moving lemma, one can assume that $P\in U_0$. Then, by Lemma \ref{ermak}, there exists a cubic $\bcC _1$ in $\bcL _3\smallsetminus \bcM _3$, such that $\bcC _1$ passes through $P$ and $P_0$, and the curve $B_1=S_0\cap \bcC _1$ is nonsingular. Let $\bcQ =\bcQ _0$, and let $f:\bcQ \dasharrow \PR ^1$ be the pencil of $K3$-surfaces obtained by restricting the pencil $|\bcC _t|_{t\in\PR ^1}$, spanned by $\bcC _0$ and $\bcC _1$, to the quadric $\bcQ $. The nonsingular curve $B=B_1$ is the base locus of the pencil $f$ and $P,P_0\in B$. Now, for any $t\in \PR ^1$ let $S_t=\bcC _t\cap \bcQ $, let $i_t:S_t\to \bcQ $ be the corresponding closed embedding, and let $j_t:B\to S_t$ be the closed embedding of the base locus into the fibre (without loss of generality, we may think of $S_0$ as the fibre over the point $t=0$). 
Let $\alpha $ be the class of $P-P_0$ in $A^1(B)$ and let $\alpha ^{\sharp }=\alpha -\tau ^*(\alpha )\in A^1(B)^{\sharp }$. To prove the theorem we need to show that ${j_0}_*(\alpha ^{\sharp })$ vanishes. Assumption (A) is satisfied for the pencil $f$ and $\znak =\sharp $. By Theorem \ref{Voisin's theorem} and Remark \ref{move}, the cycle class $\alpha ^{\sharp }$ is a sum of cycle classes of type $j_t^*(\alpha ^{\sharp }_t)$, where $\alpha ^{\sharp }_t\in CH^1(S_t)^{\sharp }$ and $t\neq 0$. For each $t\in \PR ^1$, such that $t\neq 0$, we have the Cartesian square $$ \xymatrix{ B \ar[dd]_-{j_t} \ar[rr]^-{j_0} & & S_0 \ar[dd]^-{i_0} \\ \\ S_t \ar[rr]^-{i_t} & & \bcQ } $$ \noindent It consists of four closed embeddings, each of which is an embedding of a Cartier divisor into the target variety. This is why all the embeddings are regular, whence ${j_0}_*\circ j_t^*=i_0^*\circ {i_t}_*$, as homomorphisms from $CH^1(S_t)$ to $CH^2(S_0)$, see \cite[Section 6.2]{Fulton}. Since $\alpha ^{\sharp }$ is a sum of cycle classes $j_t^*(\alpha ^{\sharp }_t)$, it follows that ${j_0}_*(\alpha ^{\sharp })=i_0^*(\delta ^{\sharp })$ for some $\delta ^{\sharp }$ in $CH^2(\bcQ )^{\sharp }$. Since $\bcQ $ is a $3$-dimensional quadric hypersurface in $\PR ^4$, the group $CH^2(\bcQ )$ is isomorphic to $\ZZ $, with the generator represented by the class of a line $L$ in $\bcQ $. As the line $\tau (L)$ is rationally equivalent to $L$ on $\bcQ $, the group $CH^2(\bcQ )^{\sharp }$ vanishes. Therefore, $\delta ^{\sharp }=0$ and hence ${j_0}_*(\alpha ^{\sharp })=0$. This finishes the proof of Theorem \ref{identity}. \end{pf} \section{The $\tau $-action on $A^2(\bcC )$} Let $\bcC $ be a general nonsingular cubic from $\bcL _3$ and consider the linear projection of $\PR ^4$ onto $\Pi _{\tau }$ from the line $l_{\tau }\subset \bcC $. Restricting the projection to $\bcC $, we get a rational map $p:\bcC \dashrightarrow \Pi _{\tau }$. 
Blowing up $\bcC $ along the indeterminacy locus $l_{\tau }$, we obtain the conic bundle $$ \hat p:\hat \bcC \to \Pi _{\tau }\; . $$ Let $$ C\subset \Pi _{\tau } $$ be the discriminant curve of $\hat p$. This is an algebraic curve of degree $5$ in $\Pi _{\tau }$. Let also $\bcF $ be the Fano surface of lines on $\bcC $. Following \cite{MurreCubics}, we look at the Zariski closed subset $\bcF _0$ (respectively, $\bcF '_0$) in $\bcF $ consisting of the lines $l$ for which there exists a plane $\Pi $ in $\PR ^4$ with $\bcC \cdot \Pi = 2l + l'$ (respectively, $\bcC \cdot \Pi = l + 2l'$). In loc. cit. the cubic is projected from a line belonging neither to $\bcF _0$ nor to $\bcF _0'$, so that the discriminant curve is irreducible. In our case the line $l_{\tau }$ is not in $\bcF _0$, but it is an element of $\bcF _0'$. This has the effect that the discriminant curve $C$ is reducible and consists of two irreducible components, $$ C=C_2\cup C_3\; . $$ Here $C_2$ is the conic defined by the equation $$ 4l_{00}(x_2,x_3,x_4)l_{11}(x_2,x_3,x_4)- l_{01}(x_2,x_3,x_4)^2=0\; , $$ and $C_3$ is the cubic defined by the equation $$ f_3(x_2,x_3,x_4)=0 $$ in $\Pi _{\tau }$, where $l_{00}$, $l_{11}$, $l_{01}$ and $f_3$ are as in (\ref{S3+}). For a general choice of $l_{00}$, $l_{11}$, $l_{01}$ and $f_3$ the curves $C_2$ and $C_3$ are nonsingular and intersect each other transversally at $6$ distinct points in $\Pi _{\tau }$. For any point $P$ on $\Pi _{\tau }$ the span $\Pi _P$ of $P$ and $l_{\tau }$ intersects the cubic $\bcC $ along the line $l_{\tau }$ and a conic $C_P$. Since $P$, as well as the points of $l_{\tau }$, is fixed under the involution $\tau $, the involution acts on the fibres of the morphism $\hat p$. If $P\in C$ then the conic $C_P$ splits into two lines $$ l_P^+\qquad \hbox{and}\qquad l_P^-\; , $$ which coincide when $P$ belongs to $C_2\cap C_3$. 
Let $C'_i$ be the curve in the Grassmannian $\Gr (2,5)$ formed by the lines $l$ such that $p(l\smallsetminus l_{\tau })\in C_i$, for $i=2,3$, and let $$ C'=C'_2\cup C'_3\; . $$ Then $C'$ is a double cover of the curve $C$, carrying the involution $\iota :C'\to C'$ which transposes the lines $l_P^+$ and $l_P^-$ sitting over the points $P$ in $C$. The double covers $C_2'\to C_2$ and $C_3'\to C_3$ are ramified over the six points of intersection of $C_2$ and $C_3$, and are unramified at all other points of $C'$. Each point $P\in C_2\cap C_3$ is a double point of $C$, and if $P'\in C'_2\cap C'_3$ sits over $P$, then $P'$ is a double point of $C'$, see 1.5.3 in \cite{Beauville}. This is why the curves $C'_2$ and $C'_3$ are nonsingular. The Hurwitz formula shows that the genera of the curves $C_2'$ and $C_3'$ are $2$ and $4$ respectively. The above involution $\iota $ acts component-wise, which gives two involutions $\iota _2$ on $C_2'$ and $\iota _3$ on $C_3'$. The curve $C'$ has only double point singularities lying over the six double points in $C_2\cap C_3$. These six points on $C'$ are the fixed points of the involution $\iota $. Then $(C',\iota )$ is a Beauville pair, i.e. it satisfies the condition (B) on page 100 in \cite{Shokurov}. Let $$ \bcP =\ker (\Nm )^0=(\id -\iota ^*)\Pic (C') $$ be the generalized Prymian in the sense of Beauville, see \cite{Beauville} or \cite{Shokurov}. Notice that $\bcP $ is a principally polarized abelian variety over $\CC $, loc. cit. Any closed point $P$ on $C'_2$ or $C'_3$ gives a line $L_P$ on $\bcC $. By Beauville's result, \cite[Section 3.6]{Beauville}, the correspondence $P\mapsto L_P$ induces the isomorphism $$ \bcP \simeq A^2(\bcC )\; . $$ Let also $J_i$ be the Jacobian of the curve $C'_i$, for $i\in \{ 2,3\} $. The involutions $\iota _i$ give the Prymians $$ \bcP _i = (\id -\iota _i^*)J_i\; . $$ Since the genus of $C_2$ is zero, $\bcP _2$ coincides with $J_2$, which is an abelian surface over $\CC $. 
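The genus count above is the following routine application of the Hurwitz formula: the double covers $C'_2\to C_2$ and $C'_3\to C_3$ are ramified exactly at the $6$ points of $C_2\cap C_3$, while $g(C_2)=0$ for the conic $C_2$ and $g(C_3)=1$ for the nonsingular plane cubic $C_3$, so that $$ 2g(C'_2)-2=2\, (2\cdot 0-2)+6=2 \qquad \hbox{and}\qquad 2g(C'_3)-2=2\, (2\cdot 1-2)+6=6\; , $$ \noindent giving $g(C'_2)=2$ and $g(C'_3)=4$, as stated. 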
Let $N_6\to \dots \to N_1\to C$ be the chain of six successive normalizations of the curve $C$ at the six double points of $C$, and let $N'_6\to \dots \to N'_1\to C'$ be the chain of the corresponding six normalizations of the curve $C'$ at the double points lying over the double points of $C$. Each curve $N'_{i+1}$ inherits an involution from $N'_i$, and the corresponding generalized Prymians $\bcR _i$, $i=1,\dots ,6$, are abelian varieties by Theorem 3.5 in \cite{Shokurov}. Moreover, the involutions on the curves $N'_i$ satisfy the condition (F) of Proposition 3.9 in \cite{Shokurov}, and so give the tower of isogenies $\bcP \to \bcR _1\to \dots \to \bcR _6$ by Lemma 3.15 in loc. cit. The curve $N'_6$ is the disjoint union of the curves $C'_2$ and $C'_3$, and the restrictions of the induced involution on $N'_6$ to the connected components $C'_2$ and $C'_3$ coincide with the involutions $\iota _2$ and $\iota _3$ respectively. Therefore, $$ \bcR _6=\bcP _2\oplus \bcP _3\; , $$ and we obtain the isogeny $$ \Lambda :\bcP \to \bcP _2\oplus \bcP _3\; . $$ Notice that, by the same Lemma 3.15 in Shokurov's paper, the first isogeny $\bcP \to \bcR _1$ is an isomorphism because the Beauville pair $(C',\iota )$ satisfies the condition (B), and the last isogeny $\bcR _5\to \bcR _6$ is an isomorphism because $N'_6$ is disconnected. Each of the remaining four isogenies has degree $2$, loc. cit. Then the total isogeny $\Lambda $ has degree $2^4$. Since $\bcC $ is unirational, there exists a classical dominant rational map $\PR ^3\dasharrow \bcC $, see \cite{ClemensGriffiths}, Appendix B. Resolving its indeterminacy, we get a dominant regular morphism $\hat \PR ^3\to \bcC $. Then $\hat \PR ^3$ is balanced by Prop. 1.2 in \cite{BV}, and $\bcC $ is balanced by Prop. 1.3 in loc. cit. It follows that homological equivalence coincides with algebraic equivalence for codimension $2$ algebraic cycles on $\bcC $, see \cite[Theorem 1(ii)]{BS}. 
The group $H^4(\bcC ,\ZZ )$ is isomorphic to $\ZZ $ by the Lefschetz hyperplane section theorem and the Poincar\'e duality. This gives that $$ CH^2(\bcC )=A^2(\bcC )\oplus \ZZ \; , $$ and the action induced by $\tau $ on $CH^2(\bcC )$ splits into the action on $A^2(\bcC )$ and the identity action on $\ZZ $. For any $i$ let $H^i(\bcC )$ be the cohomology of the complex cubic $\bcC $ with coefficients in $\QQ $. Recall that $$ H^1(\bcC )=H^5(\bcC )=0\; ,\quad H^2(\bcC )=H^4(\bcC ) =\QQ \quad \hbox{and}\quad H^3(\bcC )=\QQ ^{\oplus 10}\; . $$ As for Dolbeault cohomology, we have that $h^{3,0}(\bcC )=0$ and $h^{2,1}(\bcC )=5$. Let $B_i$ be the pre-image of $\bcP _i$ in $\bcP $ under the above isogeny $\Lambda $. The Prymian $\bcP $ is generated by $B_2$ and $B_3$. Identifying $\bcP $ with $A^2(\bcC )$, and $A^2(\bcC )$ with the intermediate Jacobian $J^2(\bcC )$ via the Abel-Jacobi isomorphism, we can also look at $B_2$ and $B_3$ as two subgroups generating $A^2(\bcC )$ or $J^2(\bcC )$ respectively. The genera of the curves $C_2'$ and $C_3'$ are $2$ and $4$ respectively, whence $\bcP _2$ is an abelian surface and $\bcP _3$ is an abelian threefold over $\CC $, the dimension of $\bcP _3$ being $g(C'_3)-g(C_3)=4-1=3$; note that $\dim \bcP _2+\dim \bcP _3=5=h^{2,1}(\bcC )$, in agreement with the fact that $\Lambda $ is an isogeny. Looking at the intermediate Jacobian of $\bcC $ as the quotient $$ H^{2,1}(\bcC )^{\vee }/H_3(\bcC ,\ZZ ) $$ and taking into account that any isogeny induces an isomorphism on the level of tangent spaces, we see that $\Lambda $ induces the splitting $$ H^{2,1}(\bcC )=W_2\oplus W_3\; , $$ such that the dual vector spaces $W_2^{\vee }$ and $W_3^{\vee }$ project from the tangent space to the intermediate Jacobian onto the groups $B_2$ and $B_3$ in $J^2(\bcC )$. \begin{theorem} \label{cubic action} The involution $\tau ^*:A^2(\bcC )\to A^2(\bcC )$ acts identically on $B_3$, and as the multiplication by $-1$ on $B_2$. Similarly, the induced action on $H^{2,1}(\bcC )$ splits into the identity action on $W_3^{\vee }$ and multiplication by $-1$ on $W_2^{\vee }$. 
\end{theorem} \begin{pf} The involution $\tau $ on the cubic $\bcC $ induces the involutions $\tau _2$ on $C'_2$ and $\tau _3$ on $C'_3$. In turn, they induce the involutions $\tau _2^*$ and $\tau _3^*$ on $J_2$ and $J_3$ respectively. Let $P$ be a point on the plane $\Pi _{\tau }$, let $\Pi _P$ be the span of $P$ and $l_{\tau }$, and look at the equation of the cubic $\bcC $, $$ l_{00}(x_2,x_3,x_4)x_0^2+l_{11}(x_2,x_3,x_4)x_1^2+ l_{01}(x_2,x_3,x_4)x_0x_1 +f_3(x_2,x_3,x_4)=0\; , $$ see (\ref{S3+}) above. Under an appropriate change of the coordinates $x_3$ and $x_4$, keeping the coordinates $x_0$, $x_1$ and $x_2$ untouched, the plane $\Pi _P$ will be given by the equation $$ \Pi _P : x_3=x_4=0\; . $$ Since the coordinates $x_0$, $x_1$ and $x_2$ remain the same, the involution $\tau $ on $\PR ^4$ is expressed by the same formula, so that the equations for $l_{\tau }$ and $\Pi _{\tau }$ remain unchanged too (see Section \ref{idaction}). Substituting $x_3=x_4=0$ into the above equation for the cubic $\bcC $, we obtain the equation for the fibre $$ \Pi _P\cap \bcC $$ of the projection $$ p:\bcC \dashrightarrow \Pi _{\tau } $$ \noindent over the point $P=(0:0:1:0:0)$ of the intersection of the two planes $\Pi _P$ and $\Pi _{\tau }$. Namely, $\Pi _P\cap \bcC $ is given by the equation $$ x_2(\alpha x_0^2+\beta x_1^2+\gamma x_0x_1+\delta x_2^2)=0\; , $$ where $\alpha $, $\beta $, $\gamma $ and $\delta $ are some complex numbers. If a point $Q=(a_0:a_1:a_2:0:0)$ in $\Pi _P\cap \bcC $ is such that $a_2=0$, then $Q$ sits on the line $l_{\tau }$. Since we are interested in the fibre of the projection restricted to $\bcC \smallsetminus l_{\tau }$, we may assume that $x_2\neq 0$. Then, if $C_P$ is the Zariski closure of the set $(\Pi _P\cap \bcC )\smallsetminus l_{\tau }$ in $\bcC $, the curve $C_P$ is the conic defined by the equation $$ \alpha x_0^2+\beta x_1^2+\gamma x_0x_1+\delta x_2^2 =0 $$ in $\Pi _P$. The point $P$ is in $C$ if and only if $C_P$ splits into the union of the lines $l_P^+$ and $l_P^-$. 
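This splitting criterion can be made explicit via the Gram matrix of the conic $C_P$: the conic $C_P$ degenerates if and only if $$ \det \left( \begin{array}{ccc} \alpha & \gamma /2 & 0 \\ \gamma /2 & \beta & 0 \\ 0 & 0 & \delta \end{array} \right) = \frac{\delta \, (4\alpha \beta -\gamma ^2)}{4} $$ \noindent vanishes. One checks that, in the adapted coordinates above, $\alpha =l_{00}(P)$, $\beta =l_{11}(P)$, $\gamma =l_{01}(P)$ and $\delta =f_3(P)$, so that the vanishing of the determinant means exactly that $P$ lies on the conic $4l_{00}l_{11}-l_{01}^2=0$ or on the cubic $f_3=0$, in agreement with the description of the discriminant curve $C=C_2\cup C_3$ given above. 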
Moreover, $$ P\in C_3 \Leftrightarrow \delta =0\qquad \hbox{and} \qquad P\in C_2\smallsetminus C_3 \Leftrightarrow \delta \neq 0\; . $$ Then we see that, if $P\in C_3$, then $\delta =0$ and the lines $l_P^+$ and $l_P^-$ are given by linear forms in $x_0$ and $x_1$ only, so that they meet the plane $\Pi _{\tau }$ at the point $P$. Since any line in $\Pi _P$ defined by a linear form in $x_0$ and $x_1$ is $\tau $-invariant, the involution $\tau $ does not interchange $l_P^+$ and $l_P^-$. It follows that $\tau _3^*=\id $. Suppose $P\in C_2\smallsetminus C_3$. Then $\delta \neq 0$ and, since $C_P$ splits, $$ \alpha x_0^2+\beta x_1^2+\gamma x_0x_1+\delta x_2^2= \delta (x_2+b_0x_0+b_1x_1)(x_2-b_0x_0-b_1x_1)\; , $$ so that the lines of $C_P$ are defined by the equations $$ l_P^+ : x_2+b_0x_0+b_1x_1=0 \qquad \hbox{and} \qquad l_P^- : x_2-b_0x_0-b_1x_1=0\; , $$ which show that $\tau (l_P^+)=l_P^-$ and $\tau (l_P^-)=l_P^+$. Thus, we obtain that the involution $\tau _2^*$ coincides with the involution $\iota _2^*$ on $J_2$, while the involution $\tau _3^*$ is the identity on $J_3$. This means that the action of $\tau ^*$ on the Prymian $\bcP _2$ is the multiplication by $-1$, and the action of $\tau ^*$ on the Prymian $\bcP _3$ is the identity. It follows that $\tau $ acts as multiplication by $-1$ on $B_2$ and identically on $B_3$. Passing to the tangent spaces, we also see that $\tau $ induces the multiplication by $-1$ on $W_2^{\vee }$ and the identity on $W_3^{\vee }$. \end{pf} Now let $\bcQ _1$ be another nonsingular $\tau $-invariant quadric in $\bcL _2$, and consider the pencil $\bcC \dasharrow \PR ^1$ obtained by restricting the pencil $|\bcQ _t|_{t\in \PR ^1}$, spanned by $\bcQ _0$ and $\bcQ _1$, to $\bcC $. Let $\eta $ be the generic point of $\PR ^1$, $\bcC _{\eta }$ the generic fibre and $\bcC _{\bar \eta }$ the geometric generic fibre of the pencil $\bcC \dasharrow \PR ^1$. There exists a countable subset $Z$ in $\PR ^1$, such that the fibre $\bcC _{\bar \eta }$ is isomorphic, as a scheme over a subfield in $\CC (\PR^1)$, to the closed fibre $\bcC _t$, for all $t$ in $\PR ^1\smallsetminus Z$. Actually, $Z$ is the collection of all points of $\AF ^1$ whose coordinate is algebraic, together with the point at infinity $\infty $. 
The isomorphisms between $\bcC _{\bar \eta }$ and $\bcC _t$, for $t\in \PR ^1\smallsetminus Z$, commute with the action of the involution $\tau $. Then, by Theorem \ref{identity}, the action of $\tau ^*$ on $A^2(\bcC _{\bar \eta })$ is the identity. Let $g:\bcC _{\eta }\to \bcC $ be the scheme-theoretical morphism of the generic fibre $\bcC _{\eta }$ into the cubic $\bcC $. Then $g$ induces the pull-back homomorphism $g^*:A^2(\bcC )\to A^2(\bcC _{\eta })$. This homomorphism is surjective, because any closed point on the surface $\bcC _{\eta }$ has its closure in the scheme $\bcC $. Now, for any abelian group $A$ let $A_{\QQ }$ be the tensor product of $A$ with $\QQ $ over $\ZZ $. Theorem \ref{cubic action} implies that $B_2\cap B_3$ is a $2$-torsion subgroup in $A^2(\bcC )$. Therefore $A^2(\bcC )_{\QQ }$ is the direct sum of ${B_2}_{\QQ }$ and ${B_3}_{\QQ }$. By the same theorem, the involution $\tau ^*$ acts identically on ${B_3}_{\QQ }$ and as multiplication by $-1$ on ${B_2}_{\QQ }$. Let $g^*_{\QQ }$ be the homomorphism induced by $g^*$ after tensoring with $\QQ $. The rational Chow group $A^2(\bcC _{\eta })_{\QQ }$ is embedded into $A^2(\bcC _{\bar \eta })_{\QQ }$. The action of $\tau ^*$ on $A^2(\bcC _{\bar \eta })$ is the identity. This is why the action of $\tau ^*$ on $A^2(\bcC _{\eta })_{\QQ }$ is the identity too. Since $\tau ^*$ acts as multiplication by $-1$ on ${B_2}_{\QQ }$, we obtain that $g^*_{\QQ }({B_2}_{\QQ })=0$. This means, in particular, that the rational Chow group $A^2(\bcC _{\eta })_{\QQ }$ of the $K3$-surface $\bcC _{\eta }$ over $\CC (\PR ^1)$ is covered by the $\tau ^*$-invariant component ${B_3}_{\QQ }$ of $A^2(\bcC )_{\QQ }$, which is isomorphic to the $\QQ $-localized Prymian ${\bcP _3}_{\QQ }$. 
\begin{small} {\sc Department of Mathematical Sciences, University of Liverpool, Peach Street, Liverpool L69 7ZL, England, UK} \end{small} \begin{footnotesize} {\it E-mail address}: {\tt [email protected]} \end{footnotesize} \begin{small} {\sc Department of Mathematics, Yaroslavl State University, 108 Respublikanskaya str., Yaroslavl 150000, RUSSIA} \end{small} \begin{footnotesize} {\it E-mail address}: {\tt [email protected], [email protected]} \end{footnotesize} \end{document}
\begin{document} \title{Toeplitz flows and model sets} \author{M.~Baake} \address{Faculty of Mathematics, Bielefeld University, Germany} \email{[email protected]} \author{T.~J\"ager} \address{Institute of Mathematics, Friedrich Schiller University Jena, Germany} \email{[email protected]} \author{D.~Lenz} \address{Institute of Mathematics, Friedrich Schiller University Jena, Germany} \email{[email protected]} \subjclass[2010]{52C23 (primary), 37B50, 37B10 (secondary)} \begin{abstract} We show that binary Toeplitz flows can be interpreted as Delone dynamical systems induced by model sets and analyse the quantitative relations between the respective system parameters. This has a number of immediate consequences for the theory of model sets. In particular, we use our results in combination with special examples of irregular Toeplitz flows from the literature to demonstrate that irregular proper model sets may be uniquely ergodic and do not need to have positive entropy. This answers questions by Schlottmann and Moody. \end{abstract} \maketitle \section{Introduction} Toeplitz flows have played an important role in the development of ergodic theory, since they provide a wide class of minimal dynamical systems that may exhibit a variety of exotic properties. Accordingly, they have often been employed to clarify fundamental questions on dynamical systems and to provide examples for particular combinations of dynamical properties, such as strict ergodicity and positive entropy or minimality and absence of unique ergodicity \cite{JK69,MP79,Wil84}. The aim of this article is to make this rich source of examples available in the context of aperiodic order --- often referred to as the mathematical theory of quasicrystals --- and more specifically for the study of (general) cut and project sets. Cut and project sets, introduced by Meyer \cite{Mey} in a somewhat different context, are arguably the most important class of examples within the theory of quasicrystals. 
With a focus on proper (but not necessarily regular) model sets, their investigation has become a cornerstone of the theory, see e.g. the survey papers \cite{Moo1,Moo2}. Recently, also the more general classes of repetitive Meyer sets \cite{Auj,KS} (see the survey \cite{ABKL} as well) and of weak model sets \cite{BHS,KR} have been studied in some detail. It is an elementary observation that any two-sided repetitive (or almost periodic) sequence $\xi=(\xi_n)^{}_{n\in\mathbb{Z}}$ of symbols 0 and 1 can be identified with a Delone subset of $\mathbb{Z}$, for instance the one given by $\mathcal{D}(\xi)=\{n\in\mathbb{Z}\mid \xi_n=1\}$. Our main result, Theorem \ref{t.Toeplitz-CPS} in Section \ref{section:main}, shows that if $\xi$ is a Toeplitz sequence, then $\mathcal{D}(\xi)$ can always be interpreted as a model set arising from a cut and project scheme (CPS). Not surprisingly, the internal group of this CPS is chosen as the odometer associated with the Toeplitz sequence. Moreover, we provide a quantitative analysis and show that the regularity and the scaling exponents of the Toeplitz sequence can be computed either in terms of the measure of the boundary of the windows in the CPS (in the case of irregular Toeplitz flows, Section \ref{section:main}) or in terms of the box dimension of this boundary (for regular Toeplitz flows, Section \ref{section:regular}). This ties together the principal quantities of both system classes and immediately allows us to answer a number of questions on model sets, which were -- to the best of our knowledge -- still open. In particular, we thus obtain that a positive measure of the boundary of the window of a cut and project set implies neither positive topological entropy nor the existence of multiple ergodic measures (Section \ref{section:irregular}).
Moreover, using Toeplitz examples due to Downarowicz and Lacroix \cite{DL96}, we can then demonstrate that model sets may have any given countable subgroup of $\mathbb{R}\ts$ as their dynamical spectrum, provided it contains infinitely many rationals. In particular, irrational eigenvalues may occur despite the fact that the underlying odometer has only a rational point spectrum, and this further implies the existence of non-continuous eigenfunctions. It is worth mentioning that the model sets provided by our construction are minimal and satisfy the additional regularity feature of properness. In particular, they fall into both the class of repetitive Meyer sets and the class of weak model sets mentioned above. Our construction below deals with binary Toeplitz sequences, as this suffices to provide the desired counterexamples. However, clearly, similar results can be achieved for Toeplitz sequences over larger alphabets than $\{0,1\}$. Indeed, for the purposes of the present paper it is not hard to reduce the case of arbitrary Toeplitz sequences to binary sequences by identifying all letters but one. \section{Binary Toeplitz sequences and model sets}\label{section:main} Let us start by discussing Toeplitz sequences and flows. For background and references we refer the reader to \cite{Do05}. Let $\varSigma=\{0,1\}^\mathbb{Z}$ and denote by $\sigma \! : \, \varSigma\xrightarrow{\quad}\varSigma$ the left shift. Suppose $\xi\in\varSigma$ is a Toeplitz sequence, which means that for all $k\in\mathbb{Z}$ there exists $p\in\N$ such that $\xi_{k+np}=\xi_k$ for all $n\in\mathbb{Z}$. Then, the shift orbit closure $\varSigma_\xi=\overline{\{\sigma^n(\xi)\mid n\in\mathbb{Z}\}}$ is a minimal set for the shift $\sigma$, and $(\varSigma_\xi,\sigma)$ is called the \emph{Toeplitz flow} generated by $\xi$. Elements of $\varSigma_\xi$ that are not Toeplitz sequences themselves are called \emph{Toeplitz orbitals}. 
The latter necessarily exist in any $\varSigma_\xi$ that is built from a non-periodic Toeplitz sequence $\xi$; cf.\ \cite[Cor.~4.2]{TAO}. In the remainder of this paper, we shall restrict our attention to non-periodic Toeplitz sequences. In line with the standard literature, we call \[ \mathrm{Per}(p,\xi) \, = \, \{k\in\mathbb{Z}\mid \xi_k=\xi_{k+np} \;\: \mbox{for all}\;\: n\in\mathbb{Z}\} \] the \emph{$p$-skeleton} of $\xi$ and refer to its elements as \textit{$p$-periodic positions} of $\xi$. A $p\in\N$ is an \emph{essential period} of $\xi$ if $\mathrm{Per}(p',\xi)\neq\mathrm{Per}(p,\xi)$ for all $p'<p$. A \emph{period structure} for $\xi$ is a sequence $(p^{}_\ell)^{}_{\ell\in\N}$ of essential periods such that, for all $\ell\in\N$, $p^{}_\ell$ divides $p^{}_{\ell+1}$ and such that, together, they satisfy $\bigcup_{\ell\in\N} \mathrm{Per}(p^{}_\ell,\xi)=\mathbb{Z}$. Such a sequence always exists and can be obtained, for example, by defining $p^{}_\ell$ as the least common multiple of the essential periods occurring for the positions in $[-\ell, \ldots , \ell]$. The \emph{density} of the $p$-skeleton is defined as \[ D(p) \, = \, \# \bigl(\mathrm{Per}(p,\xi)\cap [0,p-1]\bigr)/p \hspace{0.5pt} . \] A Toeplitz sequence and the associated flow are called \emph{regular} if $\lim_{\ell\to\infty} D(p^{}_\ell)=1$, and \emph{irregular} otherwise. This distinction turns out to be independent of the choice of the period structure. Given a period structure $(p^{}_\ell)_{\ell\in\N}$, we let $q^{}_\ell =p^{}_\ell/p^{}_{\ell-1}$ for $\ell\geq 1$, with the convention that $p^{}_0=1$. Then, the compact Abelian group $\varOmega=\prod_{\ell\in\N} \mathbb{Z} / {q^{}_\ell} \mathbb{Z}$, equipped with the addition defined according to the carry-over rule, is called the \emph{odometer group} with \emph{scale} $(q^{}_\ell)^{}_{\ell\in\N}$. We denote the Haar measure on $\varOmega$ by $\mu$ and let $\tau \!
: \, \varOmega \xrightarrow{\quad}\varOmega$ with $\omega\mapsto\omega+(1,0,0, \ldots)$ denote the canonical minimal group rotation on $\varOmega$. We call $(\varOmega,\tau)$ the \emph{odometer associated{\hspace{0.5pt}\hspace{0.5pt}}\footnote{We note that period structures of Toeplitz sequences are not uniquely defined, but all the odometers with scales corresponding to different period structures of the same Toeplitz sequence are isomorphic.} to $(\varSigma_\xi,\sigma)$}. This odometer $(\varOmega,\tau)$ coincides with the maximal equicontinuous factor (MEF) of $(\varSigma_\xi,\sigma)$, and the factor map $\beta \! : \,\varSigma_\xi\xrightarrow{\quad}\varOmega$ can be defined by \[ \beta(x)\, = \, \omega \quad :\Longleftrightarrow \quad \mathrm{Per}\bigl(p^{}_\ell,\sigma^{k(\ell,\omega)}(x)\bigr) \, = \, \mathrm{Per}(p^{}_\ell,\xi)\hspace{0.5pt} , \quad \mbox{for all}\;\: \ell\in\N \hspace{0.5pt} , \] where $k(\ell,\omega)=\sum_{i=1}^\ell \omega_ip_{i-1}$; cf.\ \cite{DL96} for details. Given $w\in\prod_{i=1}^\ell \mathbb{Z} / {q_i} \mathbb{Z}$, we also let $k(\ell,w)=\sum_{i=1}^\ell w_ip_{i-1}$. \begin{remark}\label{rem:alternative} There is an alternative equivalent description of the odometer, which is actually closer to considerations in the quasicrystal literature, as e.g. in \cite{BM}. As this is instructive in the context of our construction below, we briefly discuss it: Let $\varOmega'$ be the inverse limit of the system $(\mathbb{Z} / {p^{}_\ell}\mathbb{Z})^{}_{\ell \geq 0}$, so the elements of $\varOmega'$ are the sequences $(x^{}_\ell)^{}_{\ell \geq 0}$ with $ x^{}_\ell \in \mathbb{Z} / {p^{}_\ell} \mathbb{Z}$ and $x^{}_{\ell -1} = \pi^{}_{\ell} (x^{}_\ell)$ for each $\ell\in \N$. Here, $\pi^{}_\ell \! : \, \mathbb{Z} / {p^{}_\ell} \mathbb{Z} \longrightarrow \mathbb{Z} / {p^{}_{\ell -1}}\mathbb{Z} $ is the canonical projection.
Then, $\varOmega'$ is an Abelian group under componentwise addition, and the map \[ \varOmega \xrightarrow{\quad} \varOmega^{\hspace{0.5pt} \prime},\; \omega \mapsto ( k(\ell,\omega))^{}_{\ell \geq 0} \hspace{0.5pt} , \] provides an isomorphism of topological groups. Under this isomorphism, the map $\tau$ on $\varOmega$ corresponds to addition of $1$ in each component of elements of $\varOmega^{\hspace{0.5pt} \prime}$. $\Diamond$ \end{remark} Let us now turn to CPSs and model sets, where we refer the reader to \cite{TAO} and references therein for background and general notions. In general, a CPS $(G,H,\mathcal{L})$ is given by a pair of locally compact Abelian groups $G,H$ together with a discrete co-compact subgroup $\mathcal{L}$ of $G\times H$ such that $\pi:G\times H\to G$ is injective on $\mathcal{L}$ and $\pi_\mathrm{int}:G\times H\to H$ maps $\mathcal{L}$ to a dense subset of $H$. Given any subset ({\em window}) $W$ of $ H$, such a CPS produces a subset of $G$, called a \emph{cut and project set}, given by \[ \mbox{\Large $\curlywedge$}(W)\ = \ \pi\left((G\times W)\cap \mathcal{L}\right) \hspace{0.5pt} . \] Such a set is called a \emph{model set} when $W$ is relatively compact with non-empty interior. In this case, $\mbox{\Large $\curlywedge$}(W)$ is always a Delone set. When, in addition, the boundary $\partial W$ of the window has zero measure in $H$, the model set is called \emph{regular}. As a standard case, one considers compact windows $W$ that satisfy $\varnothing \ne W = \overline{\mathrm{int}(W)}$, in which case they are called \emph{proper}. Since we consider Toeplitz sequences as weighted subsets of $\mathbb{Z}$, the easiest way to describe them as model sets works for a CPS of the form $(G,H,\mathcal{L})$ where $G=\mathbb{Z}$ or $\mathbb{R}\ts$ and $H = \varOmega$ is the odometer from above. 
So, we consider the situation summarised in the following diagram, \begin{equation*} \renewcommand{\arraystretch}{1.2}\begin{array}{r@{}ccccc@{}l} & G & \xleftarrow{\,\;\;\pi\;\;\,} & \mathbb{Z} \times H & \xrightarrow{\;\pi^{}_{\mathrm{int}\;}\,} & H & \\ & \cup & & \cup & & \cup & \hspace*{-1ex} \raisebox{1pt}{\text{\footnotesize dense}} \\ & \mathbb{Z} & \xleftarrow{\, 1-1 \,} & \mathcal{L} & \xrightarrow{\; \hphantom{1-1} \;} & \pi^{}_{\mathrm{int}}(\mathcal{L}) & \\ & \| & & & & \| & \\ & L & \multicolumn{3}{c}{\hspace{-0.5pt}\xrightarrow{\qquad\qquad\;\star \;\qquad\qquad}} & {L_{}}^{\star\hspace{-0.5pt}} & \\ \end{array}\renewcommand{\arraystretch}{1} \end{equation*} Further, $\mathcal{L}$ is a lattice in $G\times H$ that emerges as a diagonal embedding, \begin{equation}\label{eq:def-lat} \mathcal{L} \, := \, \{ (n, n^{\star} ) \mid n\in\mathbb{Z}\} \hspace{0.5pt} , \end{equation} where $n^{\star} := \tau^{n} (0)$ defines the so-called $\star$-map $\star \! : \, G=\mathbb{Z} \xrightarrow{\quad} H$. Clearly, the restriction of $\pi$ to $\mathcal{L}$ is one-to-one and the restriction of $\pi^{}_{\mathrm{int}}$ has dense range. Using the $\star$-map, a \emph{cut and project set} for $(G,H,\mathcal{L})$ and window $W$ can equally be written as \[ \mbox{\Large $\curlywedge$} (W) \, = \, \{ x \in L \mid x^{\star} \in W \} \ . \] Using these ingredients, our main result now reads as follows. \begin{thm}\label{t.Toeplitz-CPS} Let\/ $\varSigma=\{0,1\}^\mathbb{Z}$ and suppose\/ $\xi\in\varSigma$ is a non-periodic Toeplitz sequence with period structure\/ $(p^{}_\ell)^{}_{\ell\in\N}$. Let\/ $(\varOmega,\tau)$ be the associated odometer. Then, \[ \mathcal{D}(\xi) \, = \, \{n\in\mathbb{Z} \mid \xi_n=1 \} \] is a model set for the CPS\/ $(\mathbb{Z}, \varOmega, \mathcal{L})$ or\/ $(\mathbb{R}\ts, \varOmega, \mathcal{L})$, with the lattice\/ $\mathcal{L}$ of equation~\eqref{eq:def-lat} and the\/ $\star$-map defined above.
Moreover, the corresponding window\/ $W\subseteq\varOmega$ is proper and satisfies \[ \mu(\partial W) \, = \, 1-\lim_{\ell\to\infty} D(p^{}_\ell) \hspace{0.5pt} . \] \end{thm} \begin{proof} We discuss the case $(\mathbb{Z}, \varOmega, \mathcal{L})$ with $\mathcal{L}$ as in \eqref{eq:def-lat}; the other case is analogous, because we view $\mathcal{D}(\xi)$ as a subset of $\mathbb{Z}$, so that the $\mathbb{R}\ts$-action emerges from the $\mathbb{Z}$-action by a simple suspension with a constant height function. In order to derive the window $W$ for the CPS, we denote cylinder sets in $\varOmega$ either by $[w]=[w^{}_1, \ldots , w^{}_\ell]=\{\omega\in\varOmega\mid \omega_i=w_i \text{ for } 1 \leqslant i\leqslant \ell\}$ with $w\in\prod_{i=1}^\ell \mathbb{Z} / {q_i} \mathbb{Z}$ or, given $\omega\in\varOmega$ and $\ell\in\N$, by $[\omega]^{}_\ell =[\omega^{}_1, \ldots ,\omega^{}_\ell]$. Note that $\mu \bigl([w]\bigr)=1/p^{}_\ell$. Consider \[ A(\ell,s)\, = \, \Bigl\{ w\in\prod_{i=1}^\ell \mathbb{Z} / {q_i}\mathbb{Z} \, \Big| \, k(\ell,w) \text{ is a } p^{}_{\ell} \text{-periodic position of } \xi \text{ and } \xi_{k(\ell,w)} = s \Bigr\} \] with $s\in \{0,1 \}$. Then, define $U_\ell=\bigcup_{w\in A(\ell,1)} [w]$ and $V_\ell=\bigcup_{w\in A(\ell,0)} [w]$. Clearly, $U_\ell\subset U_{\ell +1}$ and $V_\ell\subset V_{\ell +1}$ hold for any $\ell$. Set $U=\bigcup_{\ell\in\N}U_\ell$ and $V=\bigcup_{\ell\in\N}V_\ell$. Now, we let $W=\overline{U}$ and claim that this window $W$ satisfies the assertions of our theorem. First, we show that our CPS $(\mathbb{Z},\varOmega,\mathcal{L})$ together with the window $W$ produces the Delone set $\mathcal{D}(\xi)$ as its model set, that is, \[ \mbox{\Large $\curlywedge$}(W)\, := \, \{ n\in\mathbb{Z} \mid \tau^n(0)\in W\} \, = \, \mathcal{D}(\xi) \hspace{0.5pt} . \] In order to do so, fix $k\in\mathbb{Z}$ and suppose $\xi_k=1$, so that $k\in\mathcal{D}(\xi)$. 
Let $\ell$ be the least integer such that $k$ is a $p^{}_\ell$-periodic position of $\xi$. Then, there exists a unique $k'\in[0,p^{}_\ell-1]$ such that $k'=k+n \hspace{0.5pt} p^{}_\ell$ for some $n\in\mathbb{Z}$. This $k'$ is a $p^{}_\ell$-periodic position as well and we also have $\xi_{k'}=1$. Further, we have $k'=k(\ell,w)$ for a unique $w\in \prod_{i=1}^\ell \mathbb{Z}/{q_i}\mathbb{Z}$. However, this means that we have $[w]\subseteq U\subseteq W$ by construction. Since $\tau^m(0)\in[w]$ for all $m\in k(\ell,w)+p^{}_\ell\mathbb{Z}$ (note that $\tau^{k(\ell,w)} (0) =(w,0,0,\ldots)$ and any cylinder of length $\ell$ is $p^{}_\ell$-periodic for $\tau$), we in particular have that $\tau^k(0)\in W$, so that $k\in \mbox{\Large $\curlywedge$}(W)$. In a similar way, we obtain that $\xi_k=0$ implies $\tau^k(0)\in V$ and thus $k\notin \mbox{\Large $\curlywedge$}(W)$ (note here that $V$ is open, so that $V\subseteq \varOmega\setminus W$). This proves $\mbox{\Large $\curlywedge$}(W)=\{k\in\mathbb{Z}\mid \xi_k=1\}=\mathcal{D}(\xi)$. Next, we determine the measure of $\partial W$ (and obtain properness as a byproduct). We have \[ \# \bigl(A(\ell,0)\cup A(\ell,1)\bigr)\, = \, \#\{k\in [0,p^{}_\ell-1]\mid k \textrm{ is a } p^{}_\ell\textrm{-periodic position}\} \, = \, D(p^{}_\ell)\cdot p^{}_\ell \hspace{0.5pt} . \] This means that $\mu(U_\ell\cup V_\ell) = D(p^{}_\ell)$, and thus \[ \mu(U\cup V)\, = \lim_{\ell\to\infty} \mu(U_\ell\cup V_\ell) \, = \lim_{\ell\to\infty} D(p^{}_\ell) \hspace{0.5pt} . \] Thus, it suffices to show that $\partial W=\varOmega\setminus (U\cup V)$. By openness and disjointness of $U$ and $V$, this is equivalent to $(W= \, ) \, \overline{U}= \varOmega\setminus V$ and $\overline{V} =\varOmega\setminus U$. As the situation is symmetric, we restrict to prove $W=\varOmega\setminus V$. The inclusion $W\subset \varOmega \setminus V$ is clear. It remains to show the opposite inclusion. To that end, fix $\omega\in \varOmega\setminus V$ and $\kappa\in \N$. 
We are going to show that $U$ intersects every cylinder neighbourhood $[\omega]_\kappa$ of $\omega$, so that $\omega\in \overline{U}=W$. As $\bigcup_{\ell\in\N} \mathrm{Per}(p^{}_\ell,\omega)=\mathbb{Z}$, there exists a least integer $\ell$ such that $k=k(\kappa,\omega_1, \ldots ,\omega_\kappa)$ is a $p^{}_\ell$-periodic position. First, suppose that $\ell\leq \kappa$. Then, $k$ is a $p_\kappa$-periodic position, and we have $[\omega]_\kappa\subseteq U_\kappa$ if $\xi^{}_k=1$ and $[\omega]_\kappa\subseteq V_\kappa$ if $\xi^{}_{k}=0$. The latter is not possible, since we assume $\omega\notin V$. Hence, we have that $[\omega]_\kappa\subseteq U$. Secondly, suppose that $\ell >\kappa$. Then, $k$ cannot be a $p_\kappa$-periodic position, and hence there exists $n\in\mathbb{Z}$ such that $k'=k+np_\kappa$ satisfies $\xi_{k'}=1$. Choose the least $\ell'\in\N$ such that $k'$ is a $p^{}_{\ell'}$-periodic position and let $v\in\prod_{i=1}^{\ell'} \mathbb{Z}/{q_i}\mathbb{Z}$ be such that $k'=k(\ell',v)\bmod p^{}_{\ell'}$. Set $k''=k(\ell',v)$. By construction, we have $[v]\subseteq U_{\ell'}\subseteq U$. At the same time, we have $v_i=\omega_i$ for all $1 \leqslant i \leqslant \kappa$, since $k''=k+np_\kappa+mp^{}_{\ell'}$ for some $m\in\mathbb{Z}$. Hence, we have that $[v]\subseteq [\omega]_\kappa$ and thus $U\cap [\omega]_\kappa\neq \varnothing$. As mentioned already, the argument is completely symmetric with respect to $U$ and $V$, and we also obtain $\overline{V}=\varOmega\setminus U$. This then implies that $U=\mathrm{int} (W)$, and since $W=\overline{U}$ by definition, we obtain that $W$ is proper. \end{proof} \section{Regular Toeplitz flows}\label{section:regular} For the case of \emph{regular} Toeplitz sequences, more information is available to relate the scaling behaviour of $D(p^{}_\ell)$ to the box dimension of the boundary of the window. 
In order to state the result, we assume that $d$ is a metric on $\varOmega$ that generates the product topology and is invariant under the group rotation $\tau$. Note that since cylinder sets in $\varOmega$ are mapped to cylinder sets, and $\tau$ is transitive on $\varOmega$, all cylinder sets of a given level $\ell$ have the same diameter $d_\ell$. The choice of the sequence $d_\ell$ defines the metric and is more or less arbitrary, as long as it is decreasing in $\ell$. The box dimension of $\varOmega$ depends on this choice and is given by \[ \mathrm{Dim}^{}_B(\varOmega) \, = \hspace{0.5pt} \lim_{\ell\to\infty} \frac{\log (p^{}_\ell)}{-\log (d^{}_\ell)} \hspace{0.5pt} . \] If this limit does not exist, then one defines upper and lower box dimension $\overline{\mathrm{Dim}}_B(\varOmega)$ and $\underline{\mathrm{Dim}}_B(\varOmega)$ by using the limit superior and the limit inferior, respectively. We also note that the canonical choice for the metric $d$ is given by $d^{}_\ell=p^{-1}_{\ell+1}$, but our statement is valid in general. \begin{thm}\label{t.box-dimension} Suppose that, in the situation of Theorem~$\ref{t.Toeplitz-CPS}$, we have\/ $\lim_{\ell\to\infty} D(p^{}_\ell)=1$. Then, the window\/ $W$ can be chosen such that \begin{align*} \overline{\mathrm{Dim}}^{}_{B}(\partial W) \, & = \, \left(1+\varlimsup_{\ell\to\infty} \frac{\log(1-D(p^{}_\ell))}{\log (p^{}_\ell)}\right) \overline{\mathrm{Dim}}^{}_B(\varOmega) \intertext{and} \underline{\mathrm{Dim}}^{}_{B}(\partial W) \, & = \, \left(1+\varliminf_{\ell\to\infty} \frac{\log(1-D(p^{}_\ell))}{\log (p^{}_\ell)}\right) \underline{\mathrm{Dim}}^{}_B(\varOmega) \hspace{0.5pt} . \end{align*} \end{thm} \begin{proof} The construction in the proof of Theorem~\ref{t.Toeplitz-CPS} is completely independent of the value of $\lim_{\ell\to\infty} D(p^{}_\ell)$, and in particular applies also to the regular case. Hence, we may choose the same window $W$ as above and only need to determine the box dimension of $\partial W$. To that end, note that $\partial W=\varOmega\setminus(U\cup V)$ and $U_\ell\cup V_\ell\subseteq (U\cup V)$, so that $\varOmega\setminus (U_\ell\cup V_\ell)$ contains $\partial W$. However, we have that \[ \varOmega\setminus(U_\ell\cup V_\ell) \ = \ \bigcup_{w\in\prod_{i=1}^\ell \mathbb{Z}/{q_i}\mathbb{Z}\,\setminus\, (A(\ell,0)\cup A(\ell,1))} [w] \ , \] so that this set is a union of $N(\ell)=(1-D(p^{}_\ell))\cdot p^{}_\ell$ cylinders of order $\ell$. Moreover, it is not possible to cover $\partial W$ with a smaller number of such cylinders, so that $N(\ell)$ is the least number of sets of diameter $d_\ell$ needed to cover $\partial W$. Hence, we obtain \[ \overline{\mathrm{Dim}}_B(\partial W) \ = \ \varlimsup_{\ell\to\infty} \frac{\log((1-D(p^{}_\ell))\cdot p^{}_\ell)}{-\log d_\ell} \ = \ \left(1+\varlimsup_{\ell\to\infty} \frac{\log(1-D(p^{}_\ell))}{\log p^{}_\ell}\right) \cdot \overline{\mathrm{Dim}}_B(\varOmega) \ . \] The analogous computation yields the relation for the lower box dimensions. \end{proof} \begin{remark} Let us point out some further properties and directions as follows. (a) As a consequence of our main theorem, a Toeplitz sequence is regular if and only if the associated model set is regular. (b) As is well-known, regular Toeplitz flows are almost one-to-one extensions of their MEF; cf.\ \cite{DL96}. In particular, they are uniquely ergodic and have pure point dynamical spectrum with continuous eigenfunctions, which separate almost all points. Therefore, the general characterisation of dynamical systems coming from regular model sets provided in \cite{BLM} directly applies to provide a model set construction for these systems. (c) As regular Toeplitz flows have pure point dynamical spectrum, they also exhibit pure point diffraction by the general equivalence theorem, see \cite{BL} and references therein for details and background. As they are uniquely ergodic, each individual sequence of the flow then exhibits the same pure point diffraction.
So, the method of \cite{BM} can be used to provide a CPS for a regular Toeplitz sequence. This leads to an alternative, but equivalent, way in the spirit of Remark~\ref{rem:alternative}, as can be seen from the example of the period doubling chain in \cite{BM}. $\Diamond$ \end{remark} \section{Consequences for irregular Toeplitz flows}\label{section:irregular} There has been quite some speculation on connections between irregularity of the model set and occurrence of positive entropy or failure of unique ergodicity for the associated dynamical systems. Indeed, Schlottmann asks whether irregularity of the model set implies failure of unique ergodicity \cite{Schlottmann} and Moody has suggested that irregularity is related to positive entropy. The suggestion of Moody is recorded in \cite{Pleasants} (later subsumed in \cite{PleasantsHuck}) and is also discussed in the introduction to \cite{HR}. When combined with examples of Toeplitz systems studied in the past, our main theorem allows us to answer these speculations by presenting model sets with various previously unknown features (such as irregularity combined with unique ergodicity and zero entropy). This is discussed in this section. Let us emphasise that all Toeplitz flows are minimal and that the model sets presented below are even proper (as Theorem \ref{t.Toeplitz-CPS} provides proper model sets). Arguably among the most interesting examples in our present context are irregular Toeplitz flows with zero entropy \cite{Ox52}, as these immediately imply the following statement. \begin{cor} Positive measure of the boundary of a window of a CPS is not a sufficient criterion for positive topological entropy of the associated Delone dynamical system. \qed \end{cor} We note that there even exist irregular Toeplitz flows (and thus irregular model sets) whose word complexity is only linear \cite{GJ15}. A number of further interesting examples are available in the literature. Amongst these are the following. 
\begin{itemize} \item Irregular Toeplitz flows may be uniquely ergodic \cite{Wil84}, so that a window for a CPS with a boundary of positive measure does \emph{not} contradict unique ergodicity of the resulting Delone dynamical system. Moreover, the set of ergodic invariant measures of an irregular Toeplitz flow may have any cardinality. \item Any countable subgroup of $\mathbb{R}\ts$ that contains infinitely many rationals can be the dynamical spectrum of a Toeplitz flow \cite{DL96}, and thus of the Delone dynamical system arising from a CPS. As all continuous eigenvalues of a Toeplitz flow are rational, this gives in particular examples of model sets with (many) measurable eigenvalues (see \cite{DFM} as well for a recent study of Toeplitz systems of finite topological rank with measurable eigenvalues). \item Oxtoby's original example of a Toeplitz flow with zero entropy in \cite{Ox52} is not uniquely ergodic, and the same is true of the example in \cite{GJ15}. However, there also exist uniquely ergodic irregular Toeplitz flows both with and without positive entropy \cite{Wil84}. In particular, there exist irregular model sets with uniquely ergodic minimal dynamical systems with zero entropy. \end{itemize} \begin{remark} There is an emerging theory of weak model sets (see, for instance, \cite{BHS,HR,KR,JLO}) dealing with irregular windows for a given CPS. If the arising model set satisfies a maximality condition for its density, then the associated dynamical system will generally not be minimal. Thus, the irregular Toeplitz sequences are, in this sense, never weak model sets of maximal density. They rather provide a versatile class of examples to explore the possibilities that emerge from missing out on the maximality property. $\Diamond$ \end{remark} \end{document}
Siamese method

The Siamese method, or De la Loubère method, is a simple method to construct magic squares of any odd order n (i.e. number squares in which the sums of all rows, columns and diagonals are identical). The method was brought to France in 1688 by the French mathematician and diplomat Simon de la Loubère,[1] as he was returning from his 1687 embassy to the kingdom of Siam.[2][3][4] The Siamese method makes the creation of magic squares straightforward.

Publication

De la Loubère published his findings in his book A new historical relation of the kingdom of Siam (Du Royaume de Siam, 1693), under the chapter entitled The problem of the magical square according to the Indians.[5] Although the method is generally qualified as "Siamese", which refers to de la Loubère's travel to the country of Siam, de la Loubère himself learnt it from a Frenchman named M. Vincent (a doctor who had first travelled to Persia and then to Siam, and was returning to France with the de la Loubère embassy), who himself had learnt it in the city of Surat in India:[5]

"Mr.
Vincent, whom I have so often mentioned in my Relations, seeing me one day in the ship, during our return, studiously to range the Magical Squares after the method of Bachet, informed me that the Indians of Suratte ranged them with much more facility, and taught me their method for the unequal squares only, having, he said, forgot that of the equal"

— Simon de la Loubère, A new historical relation of the kingdom of Siam.[5]

The method

The method was surprising in its effectiveness and simplicity:

"I hope that it will not be unacceptable that I give the rules and the demonstration of this method, which is surprising for its extreme facility to execute a thing, which has appeared difficult to our Mathematicians"

— Simon de la Loubère, A new historical relation of the kingdom of Siam.[5]

First, an arithmetic progression has to be chosen (such as the simple progression 1, 2, 3, 4, 5, 6, 7, 8, 9 for a square with three rows and columns (the Lo Shu square)). Then, starting from the central box of the first row with the number 1 (or the first number of any arithmetic progression), the fundamental movement for filling the boxes is diagonally up and right (↗), one step at a time. When a move would leave the square, it is wrapped around to the last row or first column, respectively. If a filled box is encountered, one moves vertically down one box (↓) instead, then continuing as before.

Order-3 magic squares

step 1    step 2    step 3    step 4    step 5
. 1 .     . 1 .     . 1 .     . 1 .     . 1 .
. . .     . . .     3 . .     3 . .     3 5 .
. . .     . . 2     . . 2     4 . 2     4 . 2

step 6    step 7    step 8    step 9
. 1 6     . 1 6     8 1 6     8 1 6
3 5 .     3 5 7     3 5 7     3 5 7
4 . 2     4 . 2     4 . 2     4 9 2

Order-5 magic squares

Step 1 (number 1):
 .  .  1  .  .
 .  .  .  .  .
 .  .  .  .  .
 .  .  .  .  .
 .  .  .  .  .

Step 2 (numbers up to 3):
 .  .  1  .  .
 .  .  .  .  .
 .  .  .  .  .
 .  .  .  .  3
 .  .  .  2  .

Step 3 (up to 5):
 .  .  1  .  .
 .  5  .  .  .
 4  .  .  .  .
 .  .  .  .  3
 .  .  .  2  .

Step 4 (up to 8):
 .  .  1  8  .
 .  5  7  .  .
 4  6  .  .  .
 .  .  .  .  3
 .  .  .  2  .

Step 5 (up to 15):
 .  .  1  8 15
 .  5  7 14  .
 4  6 13  .  .
10 12  .  .  3
11  .  .  2  9

Step 6 (complete):
17 24  1  8 15
23  5  7 14 16
 4  6 13 20 22
10 12 19 21  3
11 18 25  2  9

Other sizes

Any square of odd order n ("odd-order square") can thus be built into a magic square. The Siamese method does not, however, work for squares of even order ("even-order squares", such as 2 rows / 2 columns, 4 rows / 4 columns, etc.).
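The filling rule described above is easy to write out as a short program. The following sketch is illustrative only (the function name and signature are chosen here, not taken from the article); it also accepts an arbitrary arithmetic progression, as discussed under "Other values" below.

```python
def siamese_magic_square(n, start=1, step=1):
    """Build an n x n magic square (n odd) by the Siamese method.

    The numbers come from the arithmetic progression start, start+step, ...
    Placement rule: begin in the central box of the first row, then move
    diagonally up and right, wrapping around the edges; if the target box
    is already filled, move down one box instead.
    """
    if n < 1 or n % 2 == 0:
        raise ValueError("the Siamese method requires an odd order n")
    square = [[None] * n for _ in range(n)]  # None marks an empty box
    row, col = 0, n // 2                     # central box of the first row
    for k in range(n * n):
        square[row][col] = start + k * step
        # Tentative move: up and right, with wrap-around at the edges.
        nrow, ncol = (row - 1) % n, (col + 1) % n
        if square[nrow][ncol] is not None:
            # Box already filled: move down one box from the current one.
            nrow, ncol = (row + 1) % n, col
        row, col = nrow, ncol
    return square
```

For instance, `siamese_magic_square(3)` reproduces the order-3 square built step by step above, and `siamese_magic_square(3, start=5, step=5)` gives the square with magic sum 75 from the "Other values" section.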
Order 3
8 1 6
3 5 7
4 9 2

Order 5
17 24  1  8 15
23  5  7 14 16
 4  6 13 20 22
10 12 19 21  3
11 18 25  2  9

Order 9
47 58 69 80  1 12 23 34 45
57 68 79  9 11 22 33 44 46
67 78  8 10 21 32 43 54 56
77  7 18 20 31 42 53 55 66
 6 17 19 30 41 52 63 65 76
16 27 29 40 51 62 64 75  5
26 28 39 50 61 72 74  4 15
36 38 49 60 71 73  3 14 25
37 48 59 70 81  2 13 24 35

Other values

Any sequence of numbers can be used, provided they form an arithmetic progression (i.e. the difference of any two successive members of the sequence is a constant). Also, any starting number is possible. For example, the following sequence can be used to form an order-3 magic square according to the Siamese method (9 boxes): 5, 10, 15, 20, 25, 30, 35, 40, 45 (the magic sum is 75, for all rows, columns and diagonals).

Order 3
40  5 30
15 25 35
20 45 10

Other starting points

It is possible not to start the arithmetic progression from the middle of the top row, but then only the row and column sums will be identical and result in a magic sum, whereas the diagonal sums will differ. The result will thus not be a true magic square:

Order 3
500 700 300
900 200 400
100 600 800

Rotations and reflections

Numerous other magic squares can be deduced from the above by simple rotations and reflections.

Variations

A slightly more complicated variation of this method exists in which the first number is placed in the box just above the center box. The fundamental movement for filling the boxes remains up and right (↗), one step at a time. However, if a filled box is encountered, one moves vertically up two boxes instead, then continues as before.

Order 5
23  6 19  2 15
10 18  1 14 22
17  5 13 21  9
 4 12 25  8 16
11 24  7 20  3

Numerous variants can be obtained by simple rotations and reflections. The next square is equivalent to the above (a simple reflection): the first number is placed in the box just below the center box. The fundamental movement for filling the boxes then becomes diagonally down and right (↘), one step at a time.
If a filled box is encountered, one moves vertically down two boxes instead, then continues as before.[6]

Order 5
11 24  7 20  3
 4 12 25  8 16
17  5 13 21  9
10 18  1 14 22
23  6 19  2 15

These variations, although not quite as simple as the basic Siamese method, are equivalent to the methods developed by earlier Arab and European scholars, such as Manuel Moschopoulos (1315), Johann Faulhaber (1580–1635) and Claude Gaspard Bachet de Méziriac (1581–1638), and allowed the creation of magic squares similar to theirs.[6][7]

See also
• Conway's LUX method for magic squares
• Strachey method for magic squares

Notes and references
1. Higgins, Peter (2008). Number Story: From Counting to Cryptography. New York: Copernicus. p. 54, footnote 8. ISBN 978-1-84800-000-1.
2. Johnson, Phillip E.; Eves, Howard Whitley. Mathematical Circles Squared. p. 22.
3. Weisstein, Eric W. CRC Concise Encyclopedia of Mathematics. p. 1839.
4. Pickover, Clifford A. The Zen of Magic Squares, Circles, and Stars. p. 38.
5. A new historical relation of the kingdom of Siam, p. 228.
6. A new historical relation of the kingdom of Siam, p. 229.
7. Pickover, Clifford A. The Zen of Magic Squares, Circles, and Stars, 2002, p. 37.
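The first variation above (start just above the center, move up-right, jump two boxes up with wrap-around when blocked) can be sketched in the same style; `siamese_variant` is an illustrative name and this is only a sketch of the rule as quoted, not a reference implementation.

```python
def siamese_variant(n):
    """Variant Siamese method: the first number goes in the box just
    above the center; the fundamental move is still up and right, but a
    filled box triggers a jump two boxes up (with wrap-around)."""
    if n % 2 == 0:
        raise ValueError("odd orders only")
    square = [[0] * n for _ in range(n)]
    row, col = n // 2 - 1, n // 2            # box just above the center
    for k in range(1, n * n + 1):
        square[row][col] = k
        up, right = (row - 1) % n, (col + 1) % n
        if square[up][right]:                # filled box encountered:
            row = (row - 2) % n              # move vertically up two boxes
        else:
            row, col = up, right
    return square
```

For n = 5 this reproduces the first order-5 variant square shown above; reversing its rows gives the reflected variant.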
Wikipedia
\begin{document} \begin{abstract} Motivated by the pop-stack-sorting map on the symmetric groups, Defant defined an operator $\mathsf{Pop}_M : M \to M$ for each complete meet-semilattice $M$ by $$\mathsf{Pop}_M(x)=\bigwedge(\{y\in M: y\lessdot x\}\cup \{x\}).$$ This paper concerns the dynamics of $\mathsf{Pop}_{\mathrm{Tam}_n}$, where $\mathrm{Tam}_n$ is the $n$-th Tamari lattice. We say an element $x\in \mathrm{Tam}_n$ is $t$-$\mathsf{Pop}$-sortable if $\mathsf{Pop}_{\mathrm{Tam}_n}^t (x)$ is the minimal element, and we let $h_t(n)$ denote the number of $t$-$\mathsf{Pop}$-sortable elements in $\mathrm{Tam}_n$. We find an explicit formula for the generating function $\sum_{n\ge 1}h_t(n)z^n$ and verify Defant's conjecture that it is rational. We furthermore prove that the size of the image of $\mathsf{Pop}_{\mathrm{Tam}_n}$ is the Motzkin number $M_{n-1}$, settling a conjecture of Defant and Williams. \end{abstract} \title{The Pop-stack-sorting Operator on Tamari Lattices} \section{Introduction} Building on Knuth's stack-sorting algorithm \cite{Knuth}, West's ground-breaking work on the stack-sorting map on symmetric groups \cite{West} inspired subsequent studies, including the reverse-stack-sorting map \cite{Dukes} and the pop-stack-sorting map \cite{AN}. Recently, the pop-stack-sorting map has received considerable attention from combinatorialists \cite{ABB, ABH, CG, EG, PS}. For each complete meet-semilattice $M$, Defant defined an operator $\mathsf{Pop}_M$ that agrees with the pop-stack-sorting map when $M$ is the weak order on $S_n$ \cite{Defant_tamari}. It is defined so that $\mathsf{Pop}_M$ sends an element to the meet of itself and all elements that it covers. By definition, $M$'s minimal element $\hat{0}$ stays the same when $\mathsf{Pop}_M$ is applied. We say an element $x$ is \emph{$t$-$\mathsf{Pop}$-sortable} if $\mathsf{Pop}_M^t(x)=\hat{0}$. Pudwell and Smith \cite{PS} enumerated the $2$-$\mathsf{Pop}$-sortable elements in $S_n$ under the weak order.
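For concreteness: the pop-stack-sorting map reverses each maximal descending run of a permutation, so, for instance, it sends $3142$ to $1324$ (the descending runs $31$ and $42$ are each reversed).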
Claesson and Gu\dh mundsson \cite{CG} proved that for each fixed nonnegative integer $t$, the generating function that counts $t$-$\mathsf{Pop}$-sortable elements in $S_n$ is rational. Defant \cite{Defant_coxeter} established the analogous rationality result for the generating functions of $t$-$\mathsf{Pop}$-sortable elements of type $B$ and type $\widetilde{A}$ weak orders. Introduced in 1962, the $n$-th Tamari lattice $\mathrm{Tam}_n$ consists of semilength-$n$ Dyck paths (lattice paths from $(0,0)$ to $(n,n)$ above the diagonal $y=x$) \cite{Tamari}; its partial order will be defined in Section \ref{terminology}. There are generalizations of the definition, most notably the $m$-Tamari lattices by Bergeron and Pr\'eville-Ratelle \cite{BP} and the $\nu$-Tamari lattices introduced by Pr\'eville-Ratelle and Viennot \cite{PV}. Fundamental in algebraic combinatorics \cite{MPS}, the $n$-th Tamari lattice $\mathrm{Tam}_n$ is also isomorphic to $\mathrm{Av}_n(312)$, the lattice of $312$-pattern-avoiding permutations under the weak order of $S_n$ \cite{BW}. In this paper, we study the $\mathsf{Pop}$ operator on Tamari lattices. Let $h_t(n)$ be the number of $t$-$\mathsf{Pop}$-sortable elements in $\mathrm{Tam}_n$. A part of a conjecture by Defant \cite{Defant_tamari} is that for every fixed $t$, the generating function $\sum_{n\ge 1} h_t(n)z^n$ is rational. We confirm this statement by giving the exact formula of the generating function: \begin{theorem}\label{Catalan} Let $h_t(n)$ denote the number of $t$-$\mathsf{Pop}$-sortable Dyck paths in the $n$-th Tamari lattice $\mathrm{Tam}_n$. Then $$\sum_{n\ge 1} h_t(n)z^n=\frac{z}{1-2z-\sum_{j=2}^{t}C_{j-1}z^{j}},$$ where $C_j$ are the Catalan numbers. 
\end{theorem} Moreover, settling a conjecture in Defant and Williams's paper (Conjecture 11.2 (2) in \cite{DW}), we have the following theorem: \begin{theorem}\label{Motzkin} Define $\mathsf{Pop}(L; q)=\sum_{b\in \mathsf{Pop}_L(L)}q^{|\mathscr{U}_L(b)|}$, where $\mathscr{U}_L(b)$ is the set of elements of $L$ that cover $b$. Then we have $$\mathsf{Pop}(\mathrm{Tam}_{n+1}; q) = \sum_{k=0}^n\frac{1}{k+1}\binom{2k}{k}\binom{n}{2k}q^{n-k},$$ where the coefficients form OEIS sequence \cite{OEIS} A055151. In particular, when $q=1$, we have that $$|\mathsf{Pop}_{\mathrm{Tam}_n}(\mathrm{Tam}_n)|=M_{n-1},$$ where $M_n$ is the $n$-th Motzkin number (OEIS sequence \cite{OEIS} A001006). \end{theorem} Additional motivation for studying the size of the image of $\mathsf{Pop}_{\mathrm{Tam}_n}$ comes from a theorem by Defant and Williams (Theorem 9.13 in \cite{DW}). In that theorem, they proved that $|\mathsf{Pop}_{\mathrm{Tam}_n}(\mathrm{Tam}_n)|=|X_n|$, where $X_n=\{y\in \mathrm{Tam}_n \mid \mathsf{Row}(y)\le y\}$ and $\mathsf{Row}$ is the rowmotion operator on $\mathrm{Tam}_n$ (which is equivalent to the Kreweras complement operator on noncrossing partitions \cite{DS}). They also showed that $|X_n|$ is the number of independent dominating sets in a certain graph associated with $\mathrm{Tam}_n$ called its \emph{Galois graph}. The paper is organized as follows. In Section \ref{terminology} we give the necessary definitions. In Section \ref{proof_result1} and Section \ref{proof_result2} we prove \cref{Catalan} and \cref{Motzkin}. \section{Definitions}\label{terminology} \subsection{Lattice basics and the $\mathsf{Pop}$ operator.} \begin{definition} A \emph{meet-semilattice} is a poset $M$ such that any two elements $x,y\in M$ have a greatest lower bound (which is called their \emph{meet}, denoted by $x\wedge y$). A \emph{lattice} $L$ is a meet-semilattice such that any two elements $x,y\in L$ also have a least upper bound (which is called their \emph{join}, denoted by $x\vee y$).
A meet-semilattice is \emph{complete} if every nonempty subset $A\subset M$ has a meet.\\ Given $x,y\in M$, we say that $y$ is \emph{covered} by $x$ (denoted $y\lessdot x$) if $y<x$ and no $z\in M$ satisfies $y<z<x$. \end{definition} In this paper we only consider finite meet-semilattices, each of which has a unique minimal element $\hat{0}$. They are automatically complete. \begin{definition}[\cite{Defant_tamari}] Let $M$ be a complete meet-semilattice. Define the \emph{semilattice pop-stack-sorting operator} $\mathsf{Pop}_M:M\to M$ by $$\mathsf{Pop}_M(x)=\bigwedge(\{y\in M: y\lessdot x\}\cup \{x\}).$$ \end{definition} \begin{definition} We say an element $x$ of a complete meet-semilattice $M$ is \emph{$t$-$\mathsf{Pop}$-sortable} if $\mathsf{Pop}_M^t(x)=\hat{0}$. \end{definition} \subsection{Generalized Tamari lattices.} In this paper, a lattice path is a finite planar path that starts from the origin and at each step travels either up/$\rm{N}: (0,1)$ or right/$\rm{E}: (1,0)$. \begin{definition} The \emph{horizontal distance} of a point $p$ with respect to a lattice path $\nu$ is the maximum number of east steps one can take starting from $p$ before being strictly to the right of $\nu$. \end{definition} \begin{definition}[\cite{PV}] Let $\nu$ be a lattice path from $(0,0)$ to $(\ell -n,n)$. The \emph{generalized $\nu$-Tamari lattice} $\mathrm{Tam}(\nu)$ is defined as follows: \begin{enumerate} \item elements of $\mathrm{Tam}(\nu)$ are lattice paths $\mu$ from $(0,0)$ to $(\ell-n,n)$ that are weakly above $\nu$; \item the partial order of $\mathrm{Tam}(\nu)$ is given by the covering relation: $\mu\lessdot\mu'$ if $\mu'$ is obtained by shifting a subpath $D$ of $\mu$ by $1$ unit to the left, where $D$ satisfies (i) it is preceded by $\mathrm{E}$; (ii) its first step is $\mathrm{N}$; (iii) its endpoints $p,p'$ are of the same horizontal distance to $\nu$ and there is no point between them with the same horizontal distance to $\nu$ as $p$.
In other words, $\mu\lessdot \mu'$ if for such subpath $D$, $\mu=X\mathrm{E}DY$ and $\mu'=XD\mathrm{E}Y$. \end{enumerate} \end{definition} \begin{figure} \caption{Lattice path $\mu =$ NENENEEENE is in $\rm{Tam}(\nu)$ where $\nu =$ ENNEEEENNE.\quad Each point on $\mu$ is labeled with its horizontal distance.} \label{figure1} \end{figure} \begin{definition} When $\nu=(\mathrm{NE})^n$, the lattice $\mathrm{Tam}(\nu)$ is the $n$-th \emph{Tamari lattice} $\mathrm{Tam}_n$ consisting of the \emph{Dyck paths}. It is well-known that $|\mathrm{Tam}_n|$ is the $n$-th Catalan number $C_n$. \end{definition} \section{Proof of \cref{Catalan}}\label{proof_result1} \subsection{Preliminaries: the $\nu$-bracket vector.} \begin{definition}\label{bracketvector} Let $\mathbf{b}(\nu)=(b_0(\nu),b_1(\nu),\ldots, b_{\ell}(\nu))$ be the vector denoting the heights at each step of the lattice path $\nu$. Let the \emph{fixed position} $f_k$ denote the largest index such that $b_{f_k}(\nu)=k$. We say that an integer vector $\vec{\mathsf{b}}=(\mathsf{b}_0,\mathsf{b}_1,\ldots, \mathsf{b}_{\ell})$ is a \emph{$\nu$-bracket vector}, denoted as $\vec{\mathsf{b}}\in \rm{Vec}(\nu)$, if \begin{enumerate} \item $\mathsf{b}_{f_k}=k$ for all $k=0,\ldots, n$. \item $b_i(\nu) \le \mathsf{b}_i \le n$ for all $0\le i\le \ell$. \item If $\mathsf{b}_i =k$, then $\mathsf{b}_j \le k$ for all $i+1\le j\le f_k$. \end{enumerate} The partial order of $\rm{Vec}(\nu)$ is defined as follows: we say $(\mathsf{b}_0,\mathsf{b}_1,\ldots, \mathsf{b}_{\ell})\le (\mathsf{b}'_0,\mathsf{b}'_1,\ldots, \mathsf{b}'_{\ell})$ if $\mathsf{b}_i\le \mathsf{b}'_i$ for all $i$. \end{definition} \begin{remark} An equivalent interpretation of (3) is that $\vec{\mathsf{b}}$ is 121-pattern-avoiding. These conditions also imply the sequence $\{\mathsf{b}_i\}_{f_{k-1}+1}^{f_k}$ is non-increasing for all $k=0,\ldots, n$. \end{remark} \begin{definition} Let $\mu\in \mathrm{Tam}(\nu)$ be a path from $(0,0)$ to $(\ell-n,n)$. 
We define $\mathbf{b}(\mu)=(b_0(\mu),b_1(\mu),$ $\ldots,b_{\ell}(\mu))$ its \emph{associated vector} as follows: make $(\ell+1)$ empty slots; traverse $\mu$, and when arriving at a new grid point, write its height $k$ at the rightmost available slot among those that are weakly to the left of index $f_k$. \end{definition} \begin{remark} We alert the readers that the notation of the vector $\mathbf{b}(\mu)$ does not reflect its dependence on the fixed lattice path $\nu$. \end{remark} \begin{example}\label{associatedvec} We use $\mu=$ NENENEEENE and $\nu=$ ENNEEEENNE as in Figure \ref{figure1}. The fixed positions are $f_0=1$, $f_1=2$, $f_2=7$, $f_3=8$, and $f_4= 10$. Then we create 11 empty slots and construct the associated vector $\mathbf{b}(\mu)$ as follows: \begin{equation*}\begin{aligned}&\ \ (\underline{\ \ \ },\underline{\ 0 \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ })\to (\underline{\ 1 \ },\underline{\ 0\ },\underline{\ 1 \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ }) \\\to\ & (\underline{\ 1 \ },\underline{\ 0\ },\underline{\ 1 \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ 2 \ },\underline{\ 2 \ },\underline{\ \ \ },\underline{\ \ \ },\underline{\ \ \ }) \to (\underline{\ 1 \ },\underline{\ 0\ },\underline{\ 1 \ },\underline{\ 3\ },\underline{\ 3 \ },\underline{\ 3 \ },\underline{\ 2 \ },\underline{\ 2 \ },\underline{\ 3 \ },\underline{\ \ \ },\underline{\ \ \ }) \\\to\ & (\underline{\ 1 \ },\underline{\ 0\ },\underline{\ 1 \ },\underline{\ 3\ },\underline{\ 3 \ },\underline{\ 3 \ },\underline{\ 2 \ },\underline{\ 2 \ },\underline{\ 3 \ },\underline{\ 4 \ },\underline{\ 4 \ }). 
\end{aligned}\end{equation*} \end{example} \begin{theorem}[\cite{CPS}]\label{bij} The map $\mathbf{b} : \mathrm{Tam}(\nu) \to \mathrm{Vec}(\nu)$ is an order-preserving bijection. Furthermore, for any paths $\mu, \mu' \in \mathrm{Tam}(\nu)$, we have $\mathbf{b}(\mu\wedge \mu') = \min(\mathbf{b}(\mu), \mathbf{b}(\mu'))$, the term-wise minimum vector. \end{theorem} \begin{notation}We define the following. \begin{enumerate} \item $\Delta(\mu):= \{ i\mid i<\ell \text{ and } b_i(\mu)>b_{i+1}(\mu)\}.$ \item $\eta_i(\mu):= \begin{cases*} \max\{x\in [b_i(\nu),b_i(\mu)-1] \mid b_j(\mu) \le x,\ \forall j \in [i+1,f_x]\}& if $i\in \Delta(\mu)$, \\ b_i(\mu) & if $i\not \in \Delta(\mu)$. \end{cases*}$ \item $\mathbf{b}_{\downarrow}^i(\mu):= (b_0(\mu),\ldots,b_{i-1}(\mu), \eta_i(\mu), \ldots,b_{\ell}(\mu)) $. \end{enumerate} \end{notation} \begin{example} Again we use $\mu=$ NENENEEENE as in Figure \ref{figure1} and by \cref{associatedvec} we have that $\mathbf{b}(\mu)=(1,0,1,3,3,3,2,2,3,4,4)$. Hence, $\Delta(\mu)=\{0,5\}$, $\eta_0(\mu)= 0$, and $\eta_5(\mu)= 2$. \end{example} \begin{proposition}[\cite{Defant_tamari}]\label{popeffect} We have that $$ \mathbf{b}(\mathsf{Pop}_{\mathrm{Tam}(\nu)}(\mu))=(\eta_0(\mu),\eta_1(\mu),\ldots, \eta_{\ell}(\mu)).$$ \end{proposition} \begin{corollary}[\cite{Defant_tamari}]\label{inequality} Suppose $\mu \in \mathrm{Tam}(\nu)$ and $f_{k-1} < i < f_k\ (0\le k\le n)$. Then $b_i(\mathsf{Pop}_{\mathrm{Tam}(\nu)}(\mu)) \ge b_{i+1}(\mu)$. \end{corollary} We retain the above assumptions on the lattice path $\nu$. Let $\nu^{\#}$ be the path obtained from $\nu$ by deleting its first $f_0+1$ steps. Let $\vec{\mathsf{b}}^{\#}$ be the vector obtained from $\vec{\mathsf{b}}$ by deleting its first $f_0+1$ entries and subtracting $1$ from all remaining entries. We call this action the \emph{hash} map. Let $\mu^{\#}$ be the unique element in $\mathrm{Tam}(\nu^{\#})$ whose associated vector is $\mathbf{b}(\mu)^{\#}$.
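For example, for $\mu$ and $\nu$ as in \cref{associatedvec}, we have $f_0=1$, so the hash map deletes the first two entries of $\mathbf{b}(\mu)=(1,0,1,3,3,3,2,2,3,4,4)$ and lowers the remaining entries by $1$, giving $\mathbf{b}(\mu)^{\#}=(0,2,2,2,1,1,2,3,3)$, the associated vector of $\mu^{\#}\in \mathrm{Tam}(\nu^{\#})$ with $\nu^{\#}=$ NEEEENNE.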
\begin{corollary}\label{hash} If $\mu\in \mathrm{Tam}(\nu)$ is $t$-$\mathsf{Pop}$-sortable, then so is $\mu^{\#}\in \mathrm{Tam}(\nu^{\#})$. \end{corollary} \begin{proof} This directly follows from the fact that $\eta_i(\mu)$ is determined only by $b_j(\mu)$ for $j\ge i$. \end{proof} \subsection{Proof of the result} Let $H_t(z)=\sum_{n\ge 1} h_t(n)z^n$, the generating function in \cref{Catalan}. Let $\widetilde{H}_t(z)$ be the truncated polynomial $\sum_{n= 1}^{t-1} h_t(n)z^n$. Let $G_t(z)=\sum_{n\ge 1}g_t(n)z^n$, where $g_t(n)$ denotes the number of $t$-$\mathsf{Pop}$-sortable irreducible elements in $\mathrm{Vec}(\nu)$ for $\nu=\mathrm{E(NE)}^{ n-1}$. In this case, using the notation from \cref{bracketvector}, we have $f_k=2k+1$ and $b_i(\nu)=\lfloor i/2 \rfloor$. Therefore, the restrictions are $\mathsf{b}_{2k+1}=k$, $ \mathsf{b}_{2k}\in \{k,k+1,\ldots, n\}$, and that if $\mathsf{b}_i=k$, then $\mathsf{b}_j\le k$ for all $j=i+1,\ldots, 2k+1$, i.e., no 121-pattern can appear. Finally, we note that $\mathrm{Vec}(\mathrm{E(NE)}^{ n-1})\cong \mathrm{Vec}(\mathrm{(NE)}^n)\cong \mathrm{Tam}_n$. \begin{definition} We say $\vec{\mathsf{b}}=(\mathsf{b}_0, \mathsf{b}_1, \ldots, \mathsf{b}_{\ell})\in \mathrm{Vec}(\nu)$ for some fixed $\nu$ is \emph{irreducible} if $\mathsf{b}_0=\mathsf{b}_{\ell}$. \end{definition} \begin{lemma}\label{decomposition} Every $\nu$-bracket vector can be decomposed into irreducible $\nu_i$-bracket vectors, where $\nu$ and each $\nu_i$ are of the form $\mathrm{E(NE)}^{k-1}$. A vector is $t$-$\mathsf{Pop}$-sortable if and only if all its irreducible components are.
\end{lemma} \begin{proof} We first define the addition of two irreducible vectors $\vec{\mathsf{b}}\in \mathrm{Vec}(\mathrm{E(NE)}^{n_1-1})$ and $\vec{\mathsf{b}}'\in \mathrm{Vec}(\mathrm{E(NE)}^{n_2-1})$ as follows: $$\vec{\mathsf{b}}+\vec{\mathsf{b}}':=(\mathsf{b}_0,\mathsf{b}_1,\ldots, \mathsf{b}_{2n_1-1},\mathsf{b}'_0+n_1,\mathsf{b}'_1+n_1,\ldots, \mathsf{b}'_{2n_2-1}+n_1)\in \mathrm{Vec}(\mathrm{E(NE)} ^{n_1+n_2-1}).$$ To prove the first claim we induct on the length of the vector and note that it suffices to show that every bracket vector can be decomposed as the sum of an irreducible vector $\vec{\mathsf{b}}_{irr}$ and a shorter vector. Simply take $\vec{\mathsf{b}}_{irr}:=(\mathsf{b}_0,\mathsf{b}_1,\ldots, \mathsf{b}_{f_{\mathsf{b}_0}}).$ The second claim is clear. \end{proof} \begin{lemma}\label{decomposition_cor}Assume the notations above. Then we have $$1 +H_t(z)=\frac{1}{1-G_t(z)}.$$ \end{lemma} \begin{proof} The formula is a direct corollary of \cref{decomposition}. \end{proof} \begin{lemma}\label{hashbij} The hash map is a one-to-one correspondence between irreducible vectors in $\mathrm{Vec}(\mathrm{E(NE)}^{n-1})$ and bracket vectors in $\mathrm{Vec}(\mathrm{E(NE)}^{n-2})$. An irreducible vector $\vec{\mathsf{b}}$ is $t$-$\mathsf{Pop}$-sortable if and only if $\vec{\mathsf{b}}^{\#}$ is $t$-$\mathsf{Pop}$-sortable and $t \ge n-x_r+1,$ where $2x_r$ is the length of the last irreducible vector component of $\vec{\mathsf{b}}^{\#}$. \end{lemma} \begin{proof} Let the irreducible vector $\vec{\mathsf{b}}\in \mathrm{Vec}(\mathrm{E(NE)}^{n-1})$ be $(n,0,u_0,u_1,\ldots, u_{2n-3})$ and $\vec{\mathsf{b}}^{\#}=(u_0-1,u_1-1,\ldots, u_{2n-3}-1)\in \mathrm{Vec}(\mathrm{E(NE)}^{n-2})$. First, it is clear that from $\vec{\mathsf{b}}^{\#}$ we can recover $\vec{\mathsf{b}}$, so the hash map is a bijection. 
Next, if we decompose $\vec{\mathsf{b}}^{\#}$ as the sum of some (say $r$) irreducible vectors of lengths $2x_1,\ldots, 2x_r$, respectively (corresponding to elements in $\mathrm{Vec}(\nu)$ for $\nu=(\mathrm{E(NE)}^{x_i-1}),\ 1\le i\le r$), then we can write $$\vec{\mathsf{b}}=(n,0,u_0,u_1,\ldots, u_{2n-3})=(n,0,u_0,\ldots, u_0, \ldots, n-x_r,\ldots, n-x_r, n,\ldots, n).$$ The irreducible vector $\vec{\mathsf{b}}$ being $t$-$\mathsf{Pop}$-sortable is equivalent to $\vec{\mathsf{b}}^{\#}$ being $t$-$\mathsf{Pop}$-sortable and the first entry of $\vec{\mathsf{b}}$ turning $0$ after $t$ $\mathsf{Pop}$'s. Applying $\mathsf{Pop}_{\mathrm{Vec}(\mathrm{E(NE)}^{ n-1})}$ once changes the first entry from $n$ to $n-x_r$, and each subsequent $\mathsf{Pop}_{\mathrm{Vec}(\mathrm{E(NE)}^{ n-1})}$ decreases it by $1$, hence this is then equivalent to $t\ge n-x_r+1$. \end{proof} \begin{lemma}\label{species} Assume the notations above. Then we have $$G_t(z)=z\left((1+\widetilde{H}_t(z))G_t(z)+1\right).$$ \end{lemma} \begin{proof} This is a corollary of \cref{hashbij}. Since the hash map's image of the middle sub-vector $(u_0-1,\ldots, u_0-1, \ldots, n-x_r-1, \ldots, n-x_r-1)\in \mathrm{Vec}(\mathrm{E(NE)}^{n-x_r-1})$ is $t$-$\mathsf{Pop}$-sortable when $n-x_r\le t-1$ and the last irreducible component starts and ends with $n$ as well, we have justified the desired expression (adding $1$ to $\widetilde{H}_t(z)$ is to account for the $r=0$ case). \end{proof} \begin{lemma}\label{manypops} When $n\le t$, every path in $\mathrm{Tam}_n$ is $t$-$\mathsf{Pop}$-sortable. \end{lemma} \begin{proof} Consider the path's associated vector $\vec{\mathsf{b}}\in \mathrm{Vec}(\mathrm{E(NE)}^{n-1})$. For each $0\le i\le n-1$, $\mathsf{b}_{2i}$ decreases by at least $1$ each time unless $\mathsf{b}_{2i}=\mathsf{b}_{2i+1}$. Since $n\le t$, during the $t$ applications of $\mathsf{Pop}_{\mathrm{Vec}(\mathrm{E(NE)}^{ n-1})}$ this equality will be reached. 
This applies to all $i$, so we obtain the minimum element's associated vector. \end{proof} We are now ready to prove our first main result. \begin{proof}[Proof of \cref{Catalan}] By \cref{manypops}, $\widetilde{H}_t(z)=\sum_{n=1}^{t-1}C_nz^n$. By \cref{species}, we have that $$G_t(z)=\frac{z}{1-\sum_{n=1}^{t}C_{n-1}z^n},$$ and substituting this into \cref{decomposition_cor}, we obtain that $$H_t(z)=\frac{G_t(z)}{1-G_t(z)}=\frac{\frac{z}{1-\sum_{n=1}^{t}C_{n-1}z^n}}{1-\frac{z}{1-\sum_{n=1}^{t}C_{n-1}z^n}}=\frac{z}{1-2z-\sum_{j=2}^tC_{j-1}z^j},$$as desired. \end{proof} \section{Proof of \cref{Motzkin}}\label{proof_result2} \subsection{Preliminaries: congruence and $\mathsf{Pop}$ on subsemilattices}\label{congruencesec} \begin{definition} A \emph{lattice congruence} on a lattice $L$ is an equivalence relation $\equiv$ on $L$ such that if $x_1 \equiv x_2$ and $y_1 \equiv y_2$, then $x_1 \wedge y_1\equiv x_2 \wedge y_2$ and $x_1 \vee y_1\equiv x_2 \vee y_2$. \\ For each $x \in L$, we denote by $\pi_{\downarrow}(x)$ the minimal element of the congruence class of $x$. \end{definition} \begin{definition} A \emph{subsemilattice} of a lattice $L$ is a subset $M\subset L$ such that $x \wedge y \in M $ for all $x, y \in M$. \end{definition} \begin{theorem}\label{popcong}$($\cite{Defant_tamari}$)$ Let $L$ be a finite lattice. Let $\equiv$ be a lattice congruence on $L$ such that the set $M = \{\pi_{\downarrow}(x) \mid x \in L\}$ is a subsemilattice of $L$. Then for all $x \in M$, $$\mathsf{Pop}_M(x) = \pi_{\downarrow}(\mathsf{Pop}_L(x)).$$ \end{theorem} We now provide an example that shows how the Tamari lattice can be realized as a sublattice of $S_n$. \begin{definition} A \emph{descent} of a permutation $x=x_1\cdots x_n$ is a pair of adjacent entries $x_{i}>x_{i+1}$. A \emph{descending run} is a maximal decreasing subsequence of $x$. The \emph{pop-stack-sorting map} is the operator on $S_n$ that reverses each descending run. 
\end{definition} \begin{definition} The partial order of $S_n$ defined by the following covering relation is the \emph{right weak order}: a permutation $y$ is covered by permutation $x$ if $y$ is obtained by swapping one of $x$'s descents.\end{definition} \begin{definition}(\cite{HNT}) Two words $u,v$ are \emph{sylvester-adjacent} if there exist $a<b<c$ and words $X,Y,Z$ such that $u=XacYbZ$ and $v=XcaYbZ$. We write $u\lhd v$.\\ Two words $u, v$ are \emph{sylvester-congruent} if there is a chain of words $u = w_0, w_1,\ldots , w_m = v$ such that $w_i$ and $w_{i+1}$ are sylvester-adjacent for all $i$ ($w_i\lhd w_{i+1}$ or $w_{i}\rhd w_{i+1}$). \end{definition} We say that a permutation $\pi$ is \emph{$312$-avoiding} if it has no $i<j<k$ such that $x_j<x_k<x_i$, and is \emph{$\overline{31}2$-avoiding} if it has no $i<j$ such that $x_i<x_j<x_{i-1}$. Let $L=S_n$, and let $M=\mathrm{Av}_n(312)$ be the set of 312-avoiding permutations, both under the right weak order. It is established by Bj\"orner and Wachs \cite{BW} in their Theorem 9.6 (i) that $\mathrm{Av}_n(312)$ is a sublattice of $S_n$ and is isomorphic to the Tamari lattice $\mathrm{Tam}_n$. Reading \cite{Reading} observes that the sylvester-congruence is a lattice congruence for $S_n$ under the right weak order (note that $u \lhd v$ also implies $u\lessdot v$), and, furthermore, if we divide $S_n$ into sylvester-congruence classes, then each class has a unique 312-avoiding element. More precisely, $\mathrm{Av}_n(312)=\{\pi_{\downarrow}(x)\mid x\in S_n\}$. A concrete description of $\pi_{\downarrow}$ is that we can compute a chain $x=y_0\rhd y_1\rhd \cdots \rhd y_m= \pi_{\downarrow}(x)$ until we must stop (one can easily show that no $XcaYbZ$ (i.e., $\overline{31}2$) pattern implies no 312 pattern), and we remark that the exact construction of the chain does not matter, that is, regardless of the order of swapping one obtains the same eventual outcome. 
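For example, in $S_3$ the only non-singleton sylvester class is $\{132,312\}$: writing $312=XcaYbZ$ with $(a,b,c)=(1,2,3)$ and $X,Y,Z$ empty shows $132\lhd 312$, so $\pi_{\downarrow}(312)=132$, the unique $312$-avoiding element of the class.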
Therefore, \cref{popcong} tells us that $$\mathsf{Pop}_{\mathrm{Av}_n(312)}(x)=\pi_{\downarrow}(\mathsf{Pop}_{S_n}(x)).$$ This is especially helpful, given that $\mathsf{Pop}_{S_n}$ on the right hand side is equal to the easily characterized pop-stack-sorting map. \subsection{Proof of the result} \begin{theorem}\label{charMotzkin} We have that $x\in X_n:=\mathsf{Pop}_{\mathrm{Av}_n(312)}(\mathrm{Av}_n(312))$ if and only if $x=x_1x_2\cdots x_n$ has no consecutive double descents and ends with $n$. \end{theorem} \begin{proof} In this proof we interpret $\mathsf{Pop}$ as reversing all descending runs of a string (not required to be a permutation of $1$ to $m$), e.g., $\mathsf{Pop}(74513)=47153$, though we specify by using a subscript when it is indeed $\mathsf{Pop}_{S_m}$. We also recall the identity $\mathsf{Pop}_{\mathrm{Av}_n(312)}(y)=\pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))$, which will be used extensively. For the ``only if'' direction, we first suppose that $x=\pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))$ and we want to show that $x$ ends with $n$ and has no consecutive double descents. It is known that every permutation in the image of $\pi_{\downarrow}$ must be 312-avoiding. We first prove that the last entry must be $n$. Wherever $n$ is located for a permutation $y$, in order for it to be 312-avoiding we must have that the segment after $n$ is decreasing. Then after the effect of $\mathsf{Pop}_{S_n}$, $n$ is put at the end of the permutation and continues to stay there when we apply $\pi_{\downarrow}$ because it is never involved as $a,b,\text{ or }c$ in any $XcaYbZ$ pattern. Next we prove that there are no consecutive double descents. We use induction on the permutation length, and, with the base case being clear, we assume this claim holds for length $n-1$. Write $y=y_1y_2\cdots y_n$ and let $y_r=n$. Suppose $y_n=n$. We thus know that $\mathsf{Pop}_{S_n}(y)$ ends with $n$ and it stays at the same place under the effect of $\pi_{\downarrow}$.
Using the induction hypothesis, we have that $\pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))$ will end with $(n-1)n$ with no double descents. Suppose $y_{n-1}=n$. Let $y_n=k$. Let $\mathsf{Pop}_{S_n}(y)=z_1\cdots z_n$. Then $(z_{n-1},z_n)=(k,n)$ and $n$ stays at the same place throughout. We prove the following two claims: there is no 312 pattern involving $k$ after $\mathsf{Pop}_{S_n}$, and there is no 312 pattern involving $k$ at any stage in the chain of pairwise sylvester-adjacent permutations that we use to compute $\pi_{\downarrow}$. For the first claim, if there is a 312 pattern then there must be some $z_i,z_j$ such that $z_i>k>z_j$ and $i<j<n-1$. Since $\mathsf{Pop}_{S_n}$ does not change the relative position of entries in different descending runs, it must be that $z_i$ is before $z_j$ in preimage $y$. However, there is no 312 pattern initially in $y$, which is a contradiction. For the second claim, we know that $z_1\cdots z_n$ has no $z_i,z_j$ such that $z_i>k>z_j$ and $i<j<n-1$, and any swap ($XcaYbZ\to XacYbZ$) in the chain would not create such a pair as it moves a smaller element to the front of a larger element. Therefore, we can delete $k$ and $n$ from $y$ and lower the entries of values $k+1,\ldots, n-1$ by $1$ respectively in $y_1\cdots y_{n-2}$. We then have an element in $S_{n-2}$, say, $y_1'\cdots y_{n-2}'$, and can apply the induction hypothesis to it. Therefore, $\pi_{\downarrow}(\mathsf{Pop}_{S_{n-2}}(y_1'\cdots y_{n-2}'))$ ends with $n-2$ and has no double descents. Now we take this image and add $1$ to entries of values $k,\ldots,n-2$ and denote it as $x_1'\cdots x_{n-2}'$. Because of the previous paragraph we have shown that $\pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))= x_1'\cdots x_{n-2}'\cdot kn$, and the entire string has no double descents. Now suppose $r\le n-2$. First we consider the case $y_{r-1}<y_{r+1}$. 
We have $\mathsf{Pop}_{S_n}(y)=\mathsf{Pop}_{S_{n-1}}(y_1\cdots y_{r-1}y_{r+1}\cdots y_n)n.$ Therefore, \begin{equation*}\begin{aligned} \pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))&=\pi_{\downarrow}\big(\mathsf{Pop}_{S_{n-1}}(y_1\cdots y_{r-1}y_{r+1}\cdots y_n)\cdot n\big)\\&=\pi_{\downarrow}\big(\mathsf{Pop}_{S_{n-1}}(y_1\cdots y_{r-1}y_{r+1}\cdots y_n)\big)\cdot n,\end{aligned}\end{equation*} where $\cdot$ stands for concatenation. We apply the induction hypothesis to $y_1\cdots y_{r-1}y_{r+1}\cdots y_n$, an element of $S_{n-1}$, and obtain that the first $n-1$ places of $x$ must not have consecutive double descents. Concatenating with $n$ will not change this statement, and we conclude this case. Now we suppose $y_{r-1}>y_{r+1}$. Let $y_{q}y_{q+1}\cdots y_{r-1}$ be the longest descending run that ends with $y_{r-1}$. On one hand, $$\mathsf{Pop}_{S_n}(y_1\cdots y_{r-1}ny_{r+1}\cdots y_n)=\mathsf{Pop}(y_1\cdots y_{q-1})\cdot y_{r-1}\cdots y_{q}y_n\cdots y_{r+1}n,$$ where $y_n<\cdots <y_{r+1}<y_{r-1}<\cdots < y_q$. Now we start applying the series of swaps to apply $\pi_{\downarrow}$. Notice that every swap removes a $\overline{31}2$ pattern and $y_qy_ny_{r+1}$ is one such pattern. Thus, first $y_q$ is swapped with $y_n$. Then, $y_qy_{n-1}y_{r+1}$ should also be removed, so $y_q$ is again swapped with $y_{n-1}$. We repeat the process, and after $n-r$ swaps involving $y_q$ as the $c$ in $XcaYbZ$, the permutation becomes $$\mathsf{Pop}(y_1\cdots y_{q-1})\cdot y_{r-1}\cdots y_{q+1}y_n\cdots y_{r+1}y_qn.$$ Similarly, $y_{q+1}$ is moved to the end of $y_n\cdots y_{r+1}$, right before $y_qn$, and so is $y_{q+2},\ldots, y_{r-1}$. 
We arrive at $$\mathsf{Pop}(y_1\cdots y_{q-1})\cdot y_n\cdots y_{r+1}y_{r-1}\cdots y_{q}n.$$ We should clarify that the process of swapping is not finished yet; what we claim is that since $\pi_{\downarrow}$ is the same for sylvester-adjacent elements, we have $$\pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))=\pi_{\downarrow}\big(\mathsf{Pop}(y_1\cdots y_{q-1})\cdot y_n\cdots y_{r+1}y_{r-1}\cdots y_{q}n\big).$$ On the other hand, $$\mathsf{Pop}_{S_n}(y_1\cdots y_{r-1}y_{r+1}\cdots y_n\cdot n)=\mathsf{Pop}(y_1\cdots y_{q-1})\cdot y_{n}\cdots y_{r+1}y_{r-1}\cdots y_q\cdot n.$$ Combining these observations we obtain that \begin{equation*} \begin{aligned} \pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))&=\pi_{\downarrow}\big(\mathsf{Pop}_{S_n}(y_1\cdots y_{r-1}y_{r+1}\cdots y_n\cdot n)\big)\\&=\pi_{\downarrow}\big(\mathsf{Pop}_{S_{n-1}}(y_1\cdots y_{r-1}y_{r+1}\cdots y_n)\big)\cdot n. \end{aligned} \end{equation*} We apply the induction hypothesis to $y_1\cdots y_{r-1}y_{r+1}\cdots y_n$, an element of $S_{n-1}$, and obtain that the first $n-1$ places of $x$ must not have consecutive double descents. Concatenating with $n$ will not change this statement, and we conclude this case as well. For the ``if'' direction, we suppose that $x=x_1\cdots x_n\in S_n$ with $x_n=n$ and $x$ has no consecutive double descents. We want to show that there is some 312-avoiding permutation $y$ such that $\pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))=x$. We use strong induction on $x$'s length. We consider the position of $1$, say $x_k=1$. Then there are two immediate observations. Firstly, all entries $x_1,\ldots, x_{k-1}$ are smaller than all of $x_{k+1},\ldots, x_n$ to avoid a 312 pattern $x_jx_kx_{\ell}$ where $j<k<\ell $. Hence, it is clear that $\{x_1,\ldots, x_{k-1}\}=\{2,\ldots, k\}$ and $\{x_{k+1},\ldots, x_n\}=\{k+1,\ldots, n\}$. Secondly, if $k\ge 2$, then $x_{k-1}=k$. 
Otherwise, if $x_j=k$ for some other $j\le k-2$, then $x_jx_{j+1}x_{j+2}$ forms either a double descent or a 312 pattern, which is impossible. We let $x_i'=x_i-1$ if $1\le i\le k-1$ and let $x_i'=x_i-k$ if $k+1\le i\le n$. Then $x_1'x_2'\cdots x_{k-1}'\in S_{k-1}$ and $x_{k+1}'x_{k+2}'\cdots x_n'\in S_{n-k}$ are two strings with no double descents, and $x_{k-1}'=k-1$, $x_n'=n-k$. Both of them satisfy the induction hypothesis, so we can find $z=z_1\cdots z_{k-1}\in S_{k-1}$ and $w=w_1\cdots w_{n-k}\in S_{n-k}$ such that $\pi_{\downarrow}(\mathsf{Pop}_{S_{k-1}}(z))=x_1'x_2'\cdots x_{k-1}'$ and $\pi_{\downarrow}(\mathsf{Pop}_{S_{n-k}}(w))=x_{k+1}'x_{k+2}'\cdots x_n'$. Let $z'=z_1'\cdots z_{k-1}'$ where $z_i'=z_i+1$. Suppose $w_t=1$, so that after the shift below $w_t'=k+1$. Let $w'=w_1'\cdots w_t'\cdot 1\cdot w_{t+1}'\cdots w_{n-k}'$, where we let $w_i'=w_i+k$. Consider $y=z'\cdot w'$. It is clear that $y$ is 312-avoiding. Indeed, $z'$ and $w'$ are both 312-avoiding, and no pattern can be formed by entries from both segments because no entry of $z'$ can be larger than any entry of $w'$ except $1$. It suffices to show that $\pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))=x$. We carefully investigate $\pi_{\downarrow}(\mathsf{Pop}(w'))$ as follows. After $\mathsf{Pop}$, the entry $w_t'=k+1$ will be after $1$, and thus for $\pi_{\downarrow}$ we can perform a series of $XcaYbZ\to XacYbZ$ swaps with $a=1$ and $b=k+1$, until $1$ is moved to the start of this string.
In other words, since sylvester-adjacent elements have the same $\pi_{\downarrow}$ image, $$\pi_{\downarrow}(\mathsf{Pop}(w'))=\pi_{\downarrow}(1\cdot \mathsf{Pop}(w_1'\cdots w_{n-k}'))=1\cdot \pi_{\downarrow}(\mathsf{Pop}(w_1'\cdots w_{n-k}')).$$ Since no pattern can be formed by entries from both $z'$ and $w'$, we have that \begin{equation*} \begin{aligned} \pi_{\downarrow}(\mathsf{Pop}_{S_n}(y))&=\pi_{\downarrow}(\mathsf{Pop}(z'))\cdot \pi_{\downarrow}(\mathsf{Pop}(w'))\\&=\pi_{\downarrow}(\mathsf{Pop}(z'))\cdot 1\cdot \pi_{\downarrow}(\mathsf{Pop}(w_1'\cdots w_{n-k}'))\\&=x_1\cdots x_{k-1}\cdot 1 \cdot x_{k+1}\cdots x_n, \end{aligned} \end{equation*} which is exactly $x$. This concludes the proof. \end{proof} \pagebreak The last ingredient that we will need in the proof of \cref{Motzkin} is the following enumerative result. \begin{theorem}$($\cite{Petersen}$)$\label{descentequalpeak} The number of 231-avoiding permutations $\pi\in S_{n+1}$ with exactly $k$ descents and $k$ peaks is $\frac{1}{k+1}\binom{2k}{k}\binom{n}{2k}$. \end{theorem} \begin{proof}[Proof of \cref{Motzkin}] Define the bijective map $r(\pi)=\pi'=\pi_1'\cdots \pi_{n+1}'$ where $\pi_i'=n+2-\pi_{n+2-i}$. We claim that $r$ preserves the number of ascents (descents) of the permutation. Indeed, place $i$ being an ascent (descent) in $\pi'$ is equivalent to place $n+1-i$ being an ascent (descent) in $\pi$, respectively. Furthermore, if in $\pi$ the descending runs are of lengths $\ell_1,\ldots, \ell_m$, then in $\pi'$ the descending runs are of lengths $\ell_m,\ldots, \ell_1$. By \cref{descentequalpeak} it suffices to establish a bijection between 231-avoiding permutations $\pi\in S_{n+1}$ with exactly $k$ descents and $k$ peaks and $\{r(\pi)\mid \pi\in \mathsf{Pop}_{\mathrm{Av}_n(312)}(\mathrm{Av}_n(312)),\mathscr{U}_L(\pi)=n-k\}$.
On one hand, take $\pi$ from the former set and we have $\mathscr{U}_L(\pi')=n-k$, as having $k$ descents is equivalent to having $n-k$ ascents for elements in $S_{n+1}.$ Here, we use the well-known fact that $\mathscr{U}_L(\pi)$ equals the number of ascents in $\pi$. On the other hand, we will show that if $\mathscr{U}_L(\pi)=n-k$, then $r(\pi)=\pi'$ is 231-avoiding and has exactly $k$ descents and $k$ peaks. Being 231-avoiding and having $k$ descents are clear. Moreover, \cref{charMotzkin} establishes that $\pi$ has no double descents and ends with $n+1$. Therefore, $\pi'$ has no double descents either. This implies that the number of peaks of $\pi'$ either equals the number of its descents or is smaller by $1$, depending on whether the first index is a descent. Since $\pi'_{n+1}=n+2-\pi_{n+1}=1$, we know that $\pi'$ has $k$ peaks. This concludes the proof. \end{proof} \end{document}
\begin{document} \title{Saving Entanglement via Nonuniform Sequence of $\pi$ Pulses} \author{G. S. Agarwal\footnote{Material presented in the lecture at the International Conference ``Quantum Nonstationary Systems'', at Brasilia, Brazil, Oct 2009.}} \address{Department of Physics, Oklahoma State University, Stillwater, OK - 74078, USA} \eads{\mailto{[email protected]}} \date{\today} \begin{abstract} We examine the question of survival of quantum entanglement between bipartite states and multiparticle states like GHZ states under the action of a dephasing bath by the application of a sequence of $\pi$ pulses. We show the great advantage of the pulse sequence of Uhrig [2007 {\it Phys. Rev. Lett.} {\bf 98} 100504], applied at irregular intervals of time, in controlling quantum entanglement. In particular, the death of entanglement can be considerably delayed by pulses. We use quantum optical techniques to obtain exact results. \end{abstract} \pacs{03.67.Pp, 03.65.Yz, 03.65.Ud} \vspace*{1in} \noindent{Contents} \begin{enumerate} \item[1.] Introduction \item[2.] Dynamical decay of entanglement under dephasing \item[3.] Single qubit coherence $S(t)$ \item[4.] Saving entanglement: numerical results \item[5.] Conclusions \end{enumerate} References\\ \maketitle \section {Introduction} It is well known that quantum entanglement deteriorates very fast due to environmental interactions, and one would like to find methods that can save or at least slow down the loss of entanglement \cite{Nielsen}. It is also now known that quantum entanglement can die much faster than the scale over which dephasing occurs \cite{Yu}. For example, the coherence of a qubit typically lasts over a time scale of the order of $T_{2}$, whereas the entanglement can exhibit sudden death. It is therefore important to extend the techniques used for single qubits to bipartite and even multipartite systems.
In this paper we examine how the pulse techniques which were developed to address dephasing can help in saving entanglement. Quantum dynamical decoupling \cite{Viola,Ban,Facchi} uses a sequence of control pulses applied to the system at intervals much shorter than the bath coherence time. In this way, the coupling of the system to the bath can be time-reversed and thus canceled. Such a non-Markovian approach has been successfully applied to two-level systems and harmonic oscillators \cite{Vitali}. A different approach was used in \cite{Agarwal1}, where a control pulse was applied to a different transition rather than the relevant two-level transition. This technique shows that the control pulse causes destructive interference between transition amplitudes at different times, which leads to inhibition of the spontaneous emission of an excited atom. Similar techniques could be useful to suppress the decoherence of a qubit coupled to a thermal bath. Other methods for protection against dephasing are known. These include the application of fast modulations to the bath \cite{Agarwal2} as well as decoherence-free subspaces \cite{Palma}. The dynamical decoupling idea has been implemented in a few recent experiments \cite{Kishimoto} with excitons in semiconductors, with Rydberg atomic qubits, with solid state qubits, and with nuclear spin qubits. More recent developments, primarily due to Uhrig \cite{Uhrig,Yang,GSUhrig,Lee}, go far beyond what had been done earlier on dynamical decoupling. The dynamical decoupling schemes use a series of $\pi$ pulses applied at regular intervals of time. The pulses reverse the evolution given by the Hamiltonian describing the interaction with a dephasing environment. This is because under a $\pi$ pulse the spin operator $S_{z}$ reverses sign. Uhrig discovered that $\pi$ pulses applied at irregular intervals of time are much more effective in controlling dephasing.
The regular pulse sequence and the Uhrig sequence are given by \begin{equation}\label{1} \displaystyle T_{j}=\frac{jT}{n+1},\qquad T_{j}=T\sin^{2}\left(\frac{\pi j}{2(n+1)}\right). \end{equation} In this paper we focus on the utility of the sequence of pulses discovered by Uhrig in saving quantum entanglement. Unlike other papers, which focus on dephasing issues, we concentrate on entanglement. This is important as the dynamical behavior of entanglement could be quite different from that of dephasing. We calculate the concurrence parameter \cite{Wootters}, which characterizes the entanglement between the two qubits. We show the net time evolution of the concurrence parameter under the action of the Uhrig sequence of pulses and compare its evolution with that obtained when the uniform sequence of pulses is applied. We show the great advantage of the Uhrig sequence over the uniform sequence in saving entanglement. A very recent experiment \cite{Du} establishes the advantage of Uhrig's sequence in lengthening the dephasing time of a single qubit. The organization of the paper is as follows: In Sec 2 we introduce the microscopic model of dephasing and calculate the relevant physical quantities under the influence of the control pulses. In Sec 3 we show how coherent state techniques can be used to obtain the dynamical results. In Sec 4 we calculate the dynamics of entanglement and present numerical results. In Sec 5 we conclude with possible generalizations of our results on entanglement. \section{Dynamical decay of entanglement under dephasing} Let us consider two qubits in an entangled state \cite{Yu} which in general could be a mixed state.
In terms of the basis states for the two qubits, we choose the initial state as \begin{equation}\label{2} \begin{array}{lcl} |1\rangle=|\uparrow\rangle_{A}\otimes|\uparrow\rangle_{B},|2\rangle=|\uparrow\rangle_{A}\otimes|\downarrow\rangle_{B}, \\ |3\rangle=|\downarrow\rangle_{A}\otimes|\uparrow\rangle_{B},|4\rangle=|\downarrow\rangle_{A}\otimes|\downarrow\rangle_{B}. \end{array} \end{equation} \begin{equation}\label{3} \rho=\left( \begin{array}{cccc} a & 0 & 0 & 0 \\ 0 & b & z & 0 \\ 0 & z^{*} & c & 0 \\ 0 & 0 & 0 & d \\ \end{array} \right). \end{equation} The state (\ref{3}) is positive and normalized if $a+b+c+d=1$ and $bc\ge|z|^2$. This state has the structure of a Werner state. For $a=d=0$ and $b=c=|z|=\frac{1}{2}$, it represents a maximally entangled state. The amount of entanglement in the state is given by the concurrence \begin{equation}\label{4} \begin{array}{lcl} C=\mbox{Max}\{0,\tilde{C}\}, \\ \tilde{C}=2\{|\rho_{23}|-\sqrt{\rho_{11}\rho_{44}}\}=2|z|(1-r), \\\displaystyle r=\frac{\sqrt{ad}}{|z|}. \end{array} \end{equation} Therefore the state is entangled as long as $|z|$ is greater than $\sqrt{ad}$. Under dephasing the diagonal elements $a$ and $d$ do not change. However, the coherence of each qubit decays as $\exp[-t/T_{2}]$, and therefore the entanglement survives as long as $|z|\exp[-2t/T_{2}]-\sqrt{ad}>0$; thus entanglement vanishes if $\displaystyle t>\frac{T_{2}}{2}\ln{\frac{|z|}{\sqrt{ad}}}$. We now examine how the action of pulses can protect the entanglement, and calculate the time over which entanglement can be made to survive. For this purpose we need to examine the microscopic model of dephasing. We make the reasonable assumption that each qubit interacts with its own bath. We can then examine the dynamics of the individual qubits and obtain the evolution of the concurrence. On a microscopic scale the dephasing can be considered to arise from the interaction of the qubit with a bath of oscillators, i.e.
from the Hamiltonian \begin{equation}\label{5} \displaystyle H=\hbar\sum_{i}\omega_{i}a_{i}^{\dag}a_{i}+\hbar S_{z}\sum_{i}g_{i}(a_{i}+a_{i}^{\dag}), \end{equation} where $S_{z}$ is the $z$ component of the spin operator for the qubit and the annihilation and creation operators $a_{i}$, $a_{i}^{\dag}$ represent the oscillators of the bosonic bath. The bath is taken to have a broad spectrum. In particular, for an Ohmic bath we take the bath spectrum as \begin{equation}\label{6} \displaystyle J\rightarrow\sum_{i}|g_{i}|^2\delta(\omega-\omega_{i})=2\alpha\omega\Theta(\omega_{D}-\omega), \end{equation} where $\omega_{D}$ is the cut-off frequency. It essentially determines the correlation time of the bath. Such a bath leads to dephasing, i.e. the spin polarization decays on the time scale $T_{2}$. The dynamical decoupling schemes use a series of $\pi$ pulses applied at regular intervals of time, whereas Uhrig applies $\pi$ pulses at irregular intervals of time. Such nonuniformly spaced pulses are much more effective in controlling dephasing over a time interval determined by the cut-off frequency and the number of pulses. The regular sequence of pulses is more effective outside this domain. The pulses reverse the evolution given by the interaction part in the Hamiltonian (\ref{5}) since under a $\pi$ pulse the spin operator $S_{z}$ reverses sign. The regular pulse sequence and the Uhrig sequence are given by equation (\ref{1}). We need to calculate the dynamical evolution of the off-diagonal element of the density matrix for the qubit.
We work in the interaction picture hence the Hamiltonian (\ref{5}) becomes \begin{equation}\label{7} \displaystyle H=\hbar S_{z}\sum_{i}g_{i}(a_{i}e^{-i\omega_{i}t}+a_{i}^{\dag}e^{i\omega_{i}t})=\hbar S_{z}B(t), \end{equation} where $B(t)$ is the bath operator given by \begin{equation}\label{8} \displaystyle B(t)=\sum_{i}g_{i}(a_{i}e^{-i\omega_{i}t}+a_{i}^{\dag}e^{i\omega_{i}t}), \end{equation} It is easy to see that the off diagonal element of the single qubit density matrix $\sigma$ is \begin{equation}\label{9} \displaystyle \sigma_{\uparrow\downarrow}(t)=\Tr_{B}\langle \downarrow|U(t)\sigma_{B}\sigma(0)U^{\dag}(t)|\uparrow\rangle, \end{equation} where $\Tr_{B}$ is over the initial bath density matrix $\sigma_{B}$ and where \begin{equation}\label{10} \displaystyle U(t)=T\exp\{-i\int_{0}^{t}S_{z}B(\tau)d\tau\}. \end{equation} This can be simplified to \begin{equation}\label{11} \begin{array}{lcl} \sigma_{\uparrow\downarrow}(t)=\sigma_{\uparrow\downarrow}(0) \Tr_{B} V_{-}(t)\sigma_{B}V_{+}^{\dag}(t), \\\hspace{0.45in}=\sigma_{\uparrow\downarrow}(0)\langle V_{+}^{\dag}(t)V_{-}(t)\rangle, \end{array} \end{equation} where \begin{equation}\label{12} \displaystyle V_{\pm}(t)=T\exp\{\mp \frac{i}{2}\int_{0}^{t}B(\tau)d\tau\}. \end{equation} Thus we can write \begin{equation}\label{13} \sigma_{\uparrow\downarrow}(t)=\sigma_{\uparrow\downarrow}(0)\zeta(t), \end{equation} \begin{equation}\label{14} \begin{array}{lcl} \zeta(t)=\langle V_{+}^{\dag}(t)V_{-}(t)\rangle, \\\hspace{0.3in}=\langle T\exp\{i\int_{0}^{t}B(\tau)d\tau\}\rangle. \end{array} \end{equation} \begin{figure} \caption{ A sequence of $\pi$ pulses is applied at times $T_{j}$.} \label{Fig1} \end{figure} So far no approximation has been made. Now we incorporate the effect of pulses in the dynamical evolution of the single qubit coherence. Let us apply a sequence of $\pi$ pulses at times $T_{j}$ as shown in figure 1. At each $T_{j}$ the interaction Hamiltonian changes sign. 
This can be easily incorporated in the dynamics and the result is \begin{equation}\label{15} \begin{array}{lcl} \zeta(t)=\langle W(t)\rangle, \\ W(t)=T\exp\{i\int_{0}^{t}B(\tau)f(\tau)d\tau\}, \\ \displaystyle f(t)=\sum_{j=0}^{N-1}(-1)^{j}\theta(t-T_{j})\theta(T_{j+1}-t), \end{array} \end{equation} where the step function satisfies $\theta(t)=1$ if $t>0$ and $\theta(t)=0$ if $t<0$. It is especially instructive to use coherent state techniques to simplify the expression for $W$. We do this in the next section. \section{Single Qubit Coherence $S(t)$} We now examine the calculation of the function $W(t)$. We note that the bath operator $B(\tau)$ is such that the commutator $[B(\tau_{1}),B(\tau_{2})]$ is a c-number. In such a case it has been shown by Glauber \cite{Glauber1} that the time ordering can be simplified. It can be shown that \begin{equation}\label{16} \fl W=\exp\{i\int_{0}^{t}B(\tau)f(\tau)d\tau\}\exp\{-\frac{1}{2}\int_{0}^{t}d\tau_{1}\int_{0}^{\tau_{1}}d\tau_{2}[B(\tau_{1}),B(\tau_{2})]f(\tau_{1})f(\tau_{2})\}. \end{equation} Since $B$ is a Hermitian operator, the last exponential is just a c-number phase factor and hence \begin{equation}\label{17} W=\exp(i\Phi(t))\exp\{i\int_{0}^{t}B(\tau)f(\tau)d\tau\}, \end{equation} \begin{equation}\label{18} \hspace{0.2in}=\exp(i\Phi(t))\Pi_{j}\exp\{if_{j}a_{j}+if_{j}^{*}a_{j}^{\dag}\}, \end{equation} where \begin{equation}\label{19} f_{j}=g_{j}\int_{0}^{t}e^{-i\omega_{j}\tau}f(\tau)d\tau.
\end{equation} On using the Baker--Hausdorff identity, (\ref{18}) can be further simplified to \begin{equation}\label{20} W=\exp(i\Phi(t))\Pi_{j}\exp\{if_{j}^{*}a_{j}^{\dag}\}\exp\{if_{j}a_{j}\}\exp\{-\frac{1}{2}|f_{j}|^2\}. \end{equation} The thermal expectation value of $W$ can be easily obtained using for example the P-representation for the thermal density matrix \cite{Glauber2} \begin{equation}\label{21} \begin{array}{lcl} \displaystyle \rho_{th j}=\frac{1}{\pi n_{j}}\int \exp\{-\frac{|\alpha|^2}{n_{j}}\}|\alpha\rangle\langle\alpha|d^{2}\alpha, \\\hspace{0.2in} \displaystyle n_{j}=\frac{1}{e^{\beta\hbar\omega_{j}}-1}. \end{array} \end{equation} Thus \begin{equation}\label{22} \fl W=\exp(i\Phi(t))\Pi_{j}\exp\{-\frac{1}{2}|f_{j}|^2\}\displaystyle\times\frac{1}{\pi n_{j}}\int \exp\{-\frac{|\alpha|^2}{n_{j}}\}\exp\{if_{j}^{*}\alpha^{*}+if_{j}\alpha\}d^{2}\alpha, \end{equation} which on simplification reduces to \begin{equation}\label{23} W=\exp(i\Phi(t))\Pi_{j}\exp\{-(n_{j}+\frac{1}{2})|f_{j}|^2\}. \end{equation} On using the form of $f_{j}$ and on introducing the spectral density of the bath oscillators, the expression (\ref{23}) becomes \begin{equation}\label{24} W=\exp(i\Phi(t))\exp\{-\int d\omega\, J(\omega)[n(\omega)+\frac{1}{2}]|f(\omega)|^{2}\}, \end{equation} where now \begin{equation}\label{25} f(\omega)=\int_{0}^{t}e^{-i\omega\tau}f(\tau)d\tau. \end{equation} The function $f(\omega)$ can be simplified using the explicit form of $f(\tau)$: \begin{equation}\label{26} \displaystyle f(\omega)=-\frac{i}{\omega}[1+(-1)^{N+1}e^{-i\omega t}+2\sum_{j=1}^{N}(-1)^{j}e^{-i\omega T_{j}}]. \end{equation} The result (\ref{23}) is equivalent to equation (8) of Uhrig \cite{Uhrig}. We also note that results like (\ref{24}) appear in the earlier literature \cite{Agarwal2} dealing especially with non-Markovian master equations.
\section{Saving Entanglement: Numerical Results} Since we work under the assumption that each qubit interacts with its own bath, the time dependent matrix elements of the density matrix in the basis (\ref{2}) can be obtained by noting that the diagonal elements do not evolve under dephasing. The off diagonal element $\rho_{23}(t)$ is given by \begin{equation}\label{27} \rho_{23}(t)=\rho_{23}(0)|\zeta(t)|^2, \end{equation} where $\zeta(t)$ is defined by equation (\ref{13}) and its explicit form is given by equation (\ref{23}). Thus \begin{equation}\label{28} |\zeta(t)|^2=S(t)=\exp\{-2\int d\omega J(\omega) (n(\omega)+\frac{1}{2})|f(\omega)|^{2}\}, \end{equation} It can then be shown that the time dependence of the concurrence is given by \begin{equation}\label{29} \begin{array}{lcl} C(t)=\mbox{Max}\{0,\tilde{C}(t)\}, \\ \displaystyle \tilde{C}(t)=2|z|\{S(t)-r\}, \\ \displaystyle r=\frac{\sqrt{ad}}{|z|}. \end{array} \end{equation} \begin{figure} \caption{ Signal vs time for $n=10$ at $T=0$. Solid lines for the optimized sequence, dotdashed lines for the equidistant sequence. From bottom to top the curves correspond to $\alpha=0.25,0.1,0.01,0.001$.} \label{Fig2} \end{figure} \begin{figure} \caption{ Signal vs time for $n=50$ at $T=0$. Solid lines for the optimized sequence, dotdashed lines for the equidistant sequence. From bottom to top the curves correspond to $\alpha=0.25,0.1,0.01,0.001$.} \label{Fig3} \end{figure} We next discuss the dynamical behavior of the entanglement. The function $f(\omega)$ has been evaluated by Uhrig. For the pulses applied at regular intervals and for $n$ even we have \begin{equation}\label{30} |f(\omega)|^2=4\tan^2[\omega t/(2n+2)]\cos^{2}(\omega t/2)/\omega^{2}\hspace{0.1in} \forall\hspace{0.1in} n\hspace{0.1in} \rm{even}, \end{equation} whereas for the Uhrig's pulse sequence \begin{equation}\label{31} |f(\omega)|^{2}\approx 16(n+1)^{2}J_{n+1}^{2}(\omega/2)/\omega^{2}, \end{equation} where $J_n$ is the Bessel function. 
The function $S(t)$ is shown in figures 2 and 3 for $n=10$ and $n=50$. The parameter $\omega_{D}^{-1}$ is a measure of the bath correlation time. These figures show that the entanglement lives much longer for the Uhrig sequence of pulses applied at nonuniform intervals of time, provided that $\omega_{D}t\leq2n$. Thus entanglement can be made to live over times that are several orders of magnitude longer than the coherence time of the bath. \section{Conclusions} In conclusion, we have shown how the effects of dephasing on the destruction of entanglement can be considerably slowed down by applying a sequence of $\pi$ pulses at the time intervals given by Uhrig. We demonstrated this explicitly for the case of a mixed entangled state of two qubits. The sequence given by Uhrig is far better in controlling the death of entanglement than the sequence applied at regular intervals of time. These conclusions also apply to multiparticle entangled states like the GHZ state \begin{equation}\label{32} \displaystyle |\Psi\rangle=\frac{1}{\sqrt{2}}(|\uparrow\cdots\uparrow\rangle-|\downarrow\cdots\downarrow\rangle), \end{equation} whose entanglement under dephasing would decay as the density matrix at time $t$ would be \begin{equation}\label{33} \begin{array}{lcl} \displaystyle \rho(t)=\frac{1}{2}|\uparrow\cdots\uparrow\rangle\langle\uparrow\cdots\uparrow|+\frac{1}{2}|\downarrow\cdots\downarrow\rangle\langle\downarrow\cdots\downarrow| \\\hspace{0.45in}\displaystyle -\frac{1}{2}\exp\{-\frac{tN}{T_{2}}\}(|\uparrow\cdots\uparrow\rangle\langle\downarrow\cdots\downarrow|+c.c.). \end{array} \end{equation} Under the application of $\pi$ pulses, the prefactor $\displaystyle \exp\{-\frac{tN}{T_{2}}\}$ would be replaced by $(S(t))^{N/2}$. Since $S(t)$ can be made close to unity for times several orders of magnitude longer than the correlation time of the bath, the entanglement of the multiparticle GHZ state would survive over a long time.
Note further that Uhrig's work has been generalized to arbitrary relaxations \cite{Yang}. Clearly these generalizations should be applicable to the considerations of entanglement. In particular we hope to examine the protection of Werner state against different models of environment. Finally we note that our ongoing work also suggests how other methods like photonic crystal environment can be used to save entanglement. \Bibliography{99} \bibitem{Nielsen} Nielsen M A and Chuang I L 2004 {\it Quantum Computation and Quantum Information} (Cambridge University) \nonum Zurek W H 2003 {\it Rev. Mod. Phys.} {\bf 75} 715 \bibitem{Yu} Yu T and Eberly J H 2004 {\it Phys. Rev. Lett.} {\bf 93} 140404\nonum Yu T and Eberly J H 2006 {\it Phys. Rev. Lett.} {\bf 97} 140403 \bibitem{Viola} Viola L and Lloyd S 1998 {\it Phys. Rev. A} {\bf 58} 2733\nonum Viola L, Knill E, and Lloyd S 1999 {\it Phys. Rev. Lett.} {\bf 82} 2417 \bibitem{Ban} Ban M 1998 {\it J. Mod. Opt.} {\bf 45} 2315 \bibitem{Facchi} Facchi P, Tasaki S, Pascazio S, Nakazato H, Tokuse A, and Lidar D A 2005 {\it Phys. Rev. A} {\bf 71} 022302 \bibitem{Vitali} Vitali D and Tombesi P 1999 {\it Phys. Rev. A} {\bf 59} 4178 \bibitem{Agarwal1} Agarwal G S, Scully M O, and Walther H 2001 {\it Phys. Rev. Lett.} {\bf 86} 4271 \bibitem{Agarwal2} Agarwal G S 1999 {\it Phys. Rev. A} {\bf 61} 013809\nonum Kofman A G and Kurizki G 2001 {\it Phys. Rev. Lett.} {\bf 87} 270405\nonum Kofman A G and Kurizki G 2004 {\it Phys. Rev. Lett.} {\bf 93} 130406\nonum Linington I E and Garraway B M 2008 {\it Phys. Rev. A} {\bf 77} 033831\nonum Gordon G 2008 {\it Europhys. Lett.} {\bf 83} 30009 \bibitem{Palma} Palma G M, Suominen K, and Ekert A K 1996 {\it Proc. R. Soc. London A} {\bf 452} 567\nonum Duan L-M and Guo G-C 1997 {\it Phys. Rev. Lett.} {\bf 79} 1953\nonum Zanardi P and Rasetti M 1997 {\it Phys. Rev. Lett.} {\bf 79} 3306\nonum Lidar D A, Chuang I L, and Whaley K B 1998 {\it Phys. Rev. 
Lett.} {\bf 81} 2594\nonum Kwiat P G, Berglund A J, Altepeter J B, and White A G 2000 {\it Science} {\bf 290} 498\nonum Ollerenshaw J E, Lidar D A, and Kay L E 2003 {\it Phys. Rev. Lett.} {\bf 91} 217904 \bibitem{Kishimoto} Kishimoto T, Hasegawa A, Mitsumori Y, Ishi-Hayase J, Sasaki M, and Minami F 2006 {\it Phys. Rev. B} {\bf 74} 073202\nonum Minns R S, Kutteruf M R, Zaidi H, Ko L, and Jones R R 2006 {\it Phys. Rev. Lett.} {\bf 97} 040504\nonum Fraval E, Sellars M J, and Longdell J J 2005 {\it Phys. Rev. Lett.} {\bf 95} 030506\nonum Morton J J L, Tyryshkin A M, Ardavan A, Benjamin S C, Porfyrakis K, Lyon S A, and Briggs G A D 2005 {\it Nature Phys.} {\bf 2} 40 \bibitem{Uhrig} Uhrig G S 2007 {\it Phys. Rev. Lett.} {\bf 98} 100504\nonum Khodjasteh K and Lidar D A 2005 {\it Phys. Rev. Lett.} {\bf 95} 180501 \bibitem{Yang} Yang W and Liu R B 2008 {\it Phys. Rev. Lett.} {\bf 101} 180403 \bibitem{GSUhrig} Uhrig G S 2008 {\it New J. Phys.} {\bf 10} 083024 \bibitem{Lee} Lee B, Witzel W M, and Das Sarma S 2008 {\it Phys. Rev. Lett.} {\bf 100} 160505 \bibitem{Wootters} Wootters W K 1998 {\it Phys. Rev. Lett.} {\bf 80} 2245 \bibitem{Du} Du J, Rong X, Zhao N, Wang Y, Yang J, and Liu R B 2009 {\it Nature} {\bf 461} 1265 \bibitem{Glauber1} Glauber R J 1965 {\it in Quantum Optics and Electronics}, eds C. DeWitt, A. Blandin and C. Cohen-Tannoudji (Gordon and Breach, New York) p~132 \bibitem{Glauber2} Glauber R J 1963 {\it Phys. Rev. Lett.} {\bf 10} 84\nonum Sudarshan E C 1963 {\it Phys. Rev. Lett.} {\bf 10} 277 \endbib \end{document}
# Linear systems and state-space representation

State-space representation is a mathematical modeling technique that represents a dynamic system as a set of first-order differential equations. It is widely used in control theory and optimal control design to analyze and design control systems.

In state-space representation, a dynamic system is described by the following equations:

$$
\dot{x} = Ax + Bu \\
y = Cx + Du
$$

where $x$ is the state vector, $u$ is the input vector, $y$ is the output vector, $A$ is the state matrix, $B$ is the input matrix, $C$ is the output matrix, and $D$ is the feedthrough matrix.

Consider the following state-space representation of a simple mass-spring-damper system:

$$
\dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\
y = \begin{bmatrix} 1 & 0 \end{bmatrix} x
$$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ is the state vector (position $x_1$ and velocity $x_2$), $u$ is the input force, $y$ is the position of the mass, and $k$, $c$, and $m$ are the stiffness, damping, and mass constants, respectively.

State-space representation has several advantages:

- It reduces higher-order differential equations to a system of first-order equations that is convenient for matrix analysis and numerical solution.
- It provides a unified framework for analyzing both continuous-time and discrete-time systems.
- It facilitates the design of control systems using feedback control techniques.

## Exercise

Derive the state-space representation for the following system:

$$
\dot{x} = -2x + u \\
y = x^2
$$

# Hamiltonian functions and their role in optimal control

Hamiltonian functions play a crucial role in optimal control theory. They provide a unified framework for analyzing and designing control systems.
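Before moving on to Hamiltonians, the state-space form from the previous section can be put to work directly in a simulation. The sketch below integrates a conventionally damped mass-spring model with forward Euler; the parameter values and the `simulate` helper are illustrative assumptions, not part of the text above.

```python
import numpy as np

# Illustrative damped mass-spring model, x = [position, velocity].
m, k, c = 1.0, 4.0, 0.5
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])

def simulate(x0, u, dt=1e-3, steps=5000):
    """Forward-Euler integration of x' = Ax + Bu with output y = Cx."""
    x = np.array(x0, dtype=float).reshape(2, 1)
    ys = []
    for _ in range(steps):
        x = x + dt * (A @ x + B * u)
        ys.append(float((C @ x)[0, 0]))
    return ys

ys = simulate([1.0, 0.0], u=0.0)
# With damping and no input, the position decays toward zero.
print(abs(ys[-1]) < abs(ys[0]))
```

Forward Euler is only adequate for small steps; a production simulation would typically use an exact discretization or an ODE solver.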
A Hamiltonian function $H(x, \dot{x})$ is defined as the total energy of a system, the sum of the kinetic energy $T(\dot{x})$ and the potential energy $V(x)$:

$$
H(x, \dot{x}) = T(\dot{x}) + V(x)
$$

(The *difference* $T - V$ is the Lagrangian function, not the Hamiltonian.)

The Hamiltonian function has several important properties:

- For an autonomous system it is a conserved quantity, meaning that its time derivative along trajectories is zero:

$$
\dot{H} = \frac{\partial H}{\partial x} \dot{x} + \frac{\partial H}{\partial \dot{x}} \ddot{x} = 0
$$

- It is the Legendre transform of the Lagrangian function $L = T - V$, which is the foundation of classical mechanics and optimal control theory.

Consider a simple pendulum system with a mass $m$ and length $l$. The kinetic energy $T(\dot{x})$ is determined by the angular velocity $\dot{x}$:

$$
T(\dot{x}) = \frac{1}{2} m l^2 \dot{x}^2
$$

The potential energy $V(x)$ is determined by the angle $x$:

$$
V(x) = m g l (1 - \cos x)
$$

where $g$ is the gravitational constant. The Hamiltonian function for this system is:

$$
H(x, \dot{x}) = \frac{1}{2} m l^2 \dot{x}^2 + m g l (1 - \cos x)
$$

Hamiltonian functions are used to formulate optimal control problems and design control systems using the principle of least action.

## Exercise

Derive the Hamiltonian function for the following system:

$$
\dot{x} = -2x + u \\
y = x^2
$$

# The LQR algorithm and its derivation

The Linear Quadratic Regulator (LQR) algorithm is a powerful method for designing optimal control systems. It is based on the principle of minimizing a quadratic cost function subject to linear system dynamics.

The LQR algorithm consists of the following steps:

1. Define the cost function $J$ as the integral of quadratic terms in the control input $u$ and the state $x$:

$$
J = \frac{1}{2} \int_0^\infty \left( x^T Q x + u^T R u \right) dt
$$

where $Q$ is positive semidefinite and $R$ is positive definite.

2.
Compute the optimal control input $u^*(x)$ by minimizing the cost function with respect to the control input:

$$
u^*(x) = -R^{-1} B^T P x
$$

where $B^T$ is the transpose of the input matrix $B$ and $P$ is the constant, positive semidefinite solution to the algebraic Riccati equation:

$$
A^T P + P A - P B R^{-1} B^T P + Q = 0
$$

3. Compute the optimal state trajectory $x^*(t)$ by simulating the closed-loop system forward from the initial state:

$$
\dot{x}^*(t) = \left( A - B R^{-1} B^T P \right) x^*(t)
$$

4. The feedback gain $K = R^{-1} B^T P$ is optimal for every initial state, so once the Riccati equation is solved no iteration over trajectories is required.

Consider the following LQR problem for the mass-spring-damper system:

$$
\dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\
y = \begin{bmatrix} 1 & 0 \end{bmatrix} x
$$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, $u$ is the input force, $y$ is the position of the mass, and $k$, $c$, and $m$ are the stiffness, damping, and mass constants, respectively. The cost function is given by:

$$
J = \frac{1}{2} \int_0^\infty \left( x^T x + u^2 \right) dt
$$

The LQR algorithm can be used to design an optimal control system for this problem.

The LQR algorithm has several advantages:

- It provides a systematic approach to optimal control design.
- It is applicable to a wide range of dynamic systems.
- It can be used to design robust and adaptive control systems.

## Exercise

Derive the LQR solution for the following system:

$$
\dot{x} = -2x + u \\
y = x^2
$$

# Formulation of the LQR problem as a quadratic programming problem

The LQR problem can be formulated as a quadratic programming (QP) problem. This formulation provides a more general framework for solving the LQR problem and allows for the inclusion of additional constraints and objectives.

A QP problem has the general form:

$$
\min_{u, x} J(u, x) \\
\text{subject to} \\
Ax + Bu \le c \\
x \ge 0
$$

where $J(u, x)$ is a quadratic cost function, $A$ and $B$ are matrices, $c$ is a vector, and $x$ and $u$ are the state and control variables, respectively. When the dynamics are discretized in time, they enter the QP as equality constraints linking successive states.
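To make the QP machinery concrete, here is a minimal sketch of solving a small *equality-constrained* QP by assembling the KKT system as one linear solve. The problem data (`H`, `g`, `A`, `b`) are illustrative assumptions, not a discretized LQR instance:

```python
import numpy as np

# min 0.5 z'Hz + g'z  subject to  Az = b  (equality-constrained QP).
# KKT conditions give one linear system: [[H, A'], [A, 0]] [z; lam] = [-g; b].
H = np.diag([1.0, 1.0, 2.0])      # illustrative positive definite Hessian
g = np.array([0.0, 0.0, -1.0])
A = np.array([[1.0, 1.0, 1.0]])   # single constraint: z1 + z2 + z3 = 1
b = np.array([1.0])

n, m_c = H.shape[0], A.shape[0]
kkt = np.block([[H, A.T], [A, np.zeros((m_c, m_c))]])
rhs = np.concatenate([-g, b])
sol = np.linalg.solve(kkt, rhs)
z, lam = sol[:n], sol[n:]

assert np.allclose(A @ z, b)  # the constraint is satisfied exactly
```

Inequality constraints (as in the general form above) need an active-set or interior-point method instead of a single linear solve, but the KKT structure is the same building block.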
Consider the following LQR problem for the mass-spring-damper system:

$$
\dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\
y = \begin{bmatrix} 1 & 0 \end{bmatrix} x
$$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, $u$ is the input force, $y$ is the position of the mass, and $k$, $c$, and $m$ are the stiffness, damping, and mass constants, respectively. The cost function is given by:

$$
J = \frac{1}{2} \int_0^\infty \left( x^T x + u^2 \right) dt
$$

After discretizing the dynamics in time (so that $x_{t+1} = A_d x_t + B_d u_t$), the LQR problem over a finite horizon can be formulated as a QP problem as follows:

$$
\min_{u, x} \; \sum_t \frac{1}{2} \left( x_t^T Q x_t + u_t^T R u_t \right) \\
\text{subject to} \\
x_{t+1} = A_d x_t + B_d u_t
$$

The QP formulation of the LQR problem has several advantages:

- It provides a more general framework for solving the LQR problem.
- It allows for the inclusion of additional constraints and objectives.
- It can be solved using a wide range of numerical optimization algorithms.

## Exercise

Formulate the following LQR problem as a QP problem:

$$
\dot{x} = -2x + u \\
y = x^2
$$

# Solving the LQR problem using numerical methods

The LQR problem can be solved using a variety of numerical optimization algorithms, such as gradient descent, Newton's method, or interior point methods.

One popular approach is to use the augmented Lagrangian method, which combines the Lagrangian function with a quadratic penalty term to create a new objective function. The augmented Lagrangian method is particularly effective for solving convex optimization problems, such as the LQR problem.

Consider the following LQR problem for the mass-spring-damper system:

$$
\dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & -c/m \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\
y = \begin{bmatrix} 1 & 0 \end{bmatrix} x
$$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, $u$ is the input force, $y$ is the position of the mass, and $k$, $c$, and $m$ are the stiffness, damping, and mass constants, respectively.
The cost function is given by:

$$ J(u, x) = \frac{1}{2} \int_0^\infty \left( x^T x + u^2 \right) dt $$

The LQR problem can be solved using the augmented Lagrangian method. The augmented Lagrangian method has several advantages:

- It is a powerful and general framework for solving convex optimization problems.
- It is computationally efficient and can be implemented using a variety of numerical optimization algorithms.
- It can be easily extended to handle nonlinear constraints and objectives.

## Exercise

Solve the following LQR problem using the augmented Lagrangian method:

$$ \dot{x} = -2x + u \\ y = x $$

# Stability and convergence properties of the LQR solution

The stability and convergence properties of the LQR solution are crucial for the practical implementation of control systems. The LQR solution is stable if the closed-loop system is asymptotically stable, meaning that the closed-loop state converges to a steady state as time approaches infinity. The convergence of the LQR solution depends on the properties of the cost function, the system dynamics, and the initial state. In general, the LQR solution converges to the optimal trajectory if the cost function is strictly convex and the system dynamics are well-behaved.

Consider the following LQR problem for a mass-spring system:

$$ \dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\ y = \begin{bmatrix} 1 & 0 \end{bmatrix} x $$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ collects the position and velocity of the mass, $u$ is the input force, $y$ is the position of the mass, and $k$ and $m$ are the stiffness and mass constants, respectively. The cost function is given by:

$$ J(u, x) = \frac{1}{2} \int_0^\infty \left( x^T x + u^2 \right) dt $$

The stability and convergence properties of the LQR solution can be analyzed by examining the closed-loop system and the properties of the cost function.
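The closed-loop analysis sketched above can be carried out in full for the scalar exercise plant $\dot{x} = -2x + u$. With assumed weights $Q = R = 1$, the scalar Riccati equation $-4P - P^{2} + 1 = 0$ has the positive root $P = \sqrt{5} - 2$, and the closed-loop pole lands at $-\sqrt{5}$, confirming asymptotic stability.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Scalar plant xdot = -2 x + u with assumed weights Q = R = 1.
A = np.array([[-2.0]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)  # scalar ARE: -4P - P^2 + 1 = 0
K = np.linalg.solve(R, B.T @ P)       # here K = P = sqrt(5) - 2

# Closed-loop pole a - b K = -2 - (sqrt(5) - 2) = -sqrt(5), strictly negative,
# so the closed loop converges to the origin from any initial state.
pole = (A - B @ K)[0, 0]
print(round(pole, 4))  # -2.2361
```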
The stability and convergence properties of the LQR solution have several implications:

- The stability and convergence of the LQR solution are crucial for the practical implementation of control systems.
- The stability and convergence properties depend on the properties of the cost function and the system dynamics.
- The design of control systems using LQR theory requires careful consideration of these properties.

## Exercise

Analyze the stability and convergence properties of the LQR solution for the following LQR problem:

$$ \dot{x} = -2x + u \\ y = x $$

# Applications of LQR theory in various engineering fields

LQR theory has found wide applications in various engineering fields, including aerospace, robotics, automotive, and biomedical engineering. Some examples of LQR applications include:

- Designing optimal control systems for spacecraft trajectory tracking.
- Designing adaptive control systems for autonomous robots.
- Designing control systems for vehicle stability and performance.
- Designing optimal control systems for medical devices, such as insulin pumps.

Consider the following LQR problem for a mass-spring system:

$$ \dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\ y = \begin{bmatrix} 1 & 0 \end{bmatrix} x $$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ collects the position and velocity of the mass, $u$ is the input force, $y$ is the position of the mass, and $k$ and $m$ are the stiffness and mass constants, respectively. The cost function is given by:

$$ J(u, x) = \frac{1}{2} \int_0^\infty \left( x^T x + u^2 \right) dt $$

The LQR solution can be used to design an optimal control system for this problem in various engineering fields. The applications of LQR theory in various engineering fields demonstrate its versatility and importance in control system design.

## Exercise

Describe an application of LQR theory in a specific engineering field.
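Circling back to the numerical-methods section, the augmented Lagrangian iteration is easy to exhibit on a toy equality-constrained quadratic that stands in for a dynamics constraint. The problem data and penalty parameter are assumptions for illustration; the optimum of this toy problem is $x = u = 0.5$.

```python
# Augmented Lagrangian iteration for
#   minimize 1/2 x^2 + 1/2 u^2   subject to   x + u = 1.
# Each sweep minimizes L(x, u) = f(x, u) + lam * c + (rho / 2) * c^2 exactly
# (by symmetry the minimizer has x = u), then takes a multiplier ascent step.
rho, lam = 10.0, 0.0
x = u = 0.0
for _ in range(50):
    x = u = (rho - lam) / (1.0 + 2.0 * rho)  # argmin of the augmented Lagrangian
    c = x + u - 1.0                          # constraint violation
    lam += rho * c                           # multiplier update
print(round(x, 3), round(u, 3))  # 0.5 0.5
```

A larger penalty parameter speeds multiplier convergence here, at the cost of conditioning in less trivial subproblems.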
# Case studies and practical examples

Case studies and practical examples are essential for understanding the practical applications of LQR theory in control system design. Some examples include:

- Designing an optimal control system for a quadcopter to achieve precise position and attitude control.
- Designing an adaptive control system for a robotic arm to achieve precise position control under varying external disturbances.
- Designing an optimal control system for a vehicle to achieve smooth and energy-efficient trajectory tracking.
- Designing an optimal control system for a medical device to achieve precise and reliable insulin delivery.

Consider the following LQR problem for a mass-spring system:

$$ \dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\ y = \begin{bmatrix} 1 & 0 \end{bmatrix} x $$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ collects the position and velocity of the mass, $u$ is the input force, $y$ is the position of the mass, and $k$ and $m$ are the stiffness and mass constants, respectively. The cost function is given by:

$$ J(u, x) = \frac{1}{2} \int_0^\infty \left( x^T x + u^2 \right) dt $$

A case study could involve designing an optimal control system for this problem in the context of a robotic arm.

Case studies and practical examples provide valuable insights into the practical applications of LQR theory and help to understand the challenges and limitations of the theory.

## Exercise

Describe a case study or practical example of LQR theory in control system design.

# Extensions and generalizations of LQR theory

LQR theory has been extended and generalized to address various challenges and limitations in control system design. Some extensions and generalizations of LQR theory include:

- Extending LQR theory to handle nonlinear systems and nonlinear cost functions.
- Generalizing LQR theory to handle uncertainty and robustness in control system design.
- Extending LQR theory to handle multiple objectives and constraints.
- Generalizing LQR theory to handle hierarchical and distributed control systems.

Consider the following LQR problem for a mass-spring system:

$$ \dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\ y = \begin{bmatrix} 1 & 0 \end{bmatrix} x $$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ collects the position and velocity of the mass, $u$ is the input force, $y$ is the position of the mass, and $k$ and $m$ are the stiffness and mass constants, respectively. The cost function is given by:

$$ J(u, x) = \frac{1}{2} \int_0^\infty \left( x^T x + u^2 \right) dt $$

An extension of LQR theory could involve designing an optimal control system for this problem using nonlinear control techniques.

The extensions and generalizations of LQR theory provide valuable insights into the challenges and limitations of the theory and help to improve the practical applicability of LQR theory in control system design.

## Exercise

Describe an extension or generalization of LQR theory that addresses a specific challenge or limitation in control system design.

# The role of LQR in modern control theory

LQR theory plays a crucial role in modern control theory. It provides a unified framework for analyzing and designing optimal control systems. LQR theory has several implications for modern control theory:

- It provides a powerful method for designing optimal control systems.
- It is applicable to a wide range of linear dynamic systems, and its extensions reach nonlinear settings.
- It can be extended and generalized to address various challenges and limitations in control system design.

Consider the following LQR problem for a mass-spring system:

$$ \dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\ y = \begin{bmatrix} 1 & 0 \end{bmatrix} x $$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ collects the position and velocity of the mass, $u$ is the input force, $y$ is the position of the mass, and $k$ and $m$ are the stiffness and mass constants, respectively.
The cost function is given by:

$$ J(u, x) = \frac{1}{2} \int_0^\infty \left( x^T x + u^2 \right) dt $$

The LQR solution can be used to design an optimal control system for this problem in various engineering fields.

The role of LQR in modern control theory highlights its importance and relevance to the field of control theory.

## Exercise

Discuss the role of LQR theory in modern control theory and its implications for the practical implementation of control systems.

# Challenges and future directions in LQR research

LQR research faces several challenges and has several future directions:

- Addressing the limitations of LQR theory, such as its inability to handle nonlinear systems or nonlinear cost functions.
- Developing new algorithms and techniques for solving the LQR problem, such as using deep learning or reinforcement learning.
- Extending LQR theory to handle uncertainty and robustness in control system design.
- Generalizing LQR theory to handle multiple objectives and constraints.
- Applying LQR theory to new domains, such as biological systems or social systems.

Consider the following LQR problem for a mass-spring system:

$$ \dot{x} = \begin{bmatrix} 0 & 1 \\ -k/m & 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u \\ y = \begin{bmatrix} 1 & 0 \end{bmatrix} x $$

where $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ collects the position and velocity of the mass, $u$ is the input force, $y$ is the position of the mass, and $k$ and $m$ are the stiffness and mass constants, respectively. The cost function is given by:

$$ J(u, x) = \frac{1}{2} \int_0^\infty \left( x^T x + u^2 \right) dt $$

A challenge in LQR research could involve extending LQR theory to handle nonlinear systems or nonlinear cost functions.

The challenges and future directions in LQR research highlight the ongoing importance and relevance of LQR theory to the field of control theory.

## Exercise

Discuss a challenge or future direction in LQR research and its potential implications for the field of control theory.
Radon–Riesz property The Radon–Riesz property is a mathematical property for normed spaces that helps ensure convergence in norm. Given two assumptions (essentially weak convergence and continuity of the norm), we would like to ensure convergence in the norm topology. Definition Suppose that (X, ||·||) is a normed space. We say that X has the Radon–Riesz property (or that X is a Radon–Riesz space) if whenever $(x_{n})$ is a sequence in the space and $x$ is a member of X such that $(x_{n})$ converges weakly to $x$ and $\lim _{n\to \infty }\Vert x_{n}\Vert =\Vert x\Vert $, then $(x_{n})$ converges to $x$ in norm; that is, $\lim _{n\to \infty }\Vert x_{n}-x\Vert =0$. Other names Although it would appear that Johann Radon was one of the first to make significant use of this property in 1913, M. I. Kadets and V. L. Klee also used versions of the Radon–Riesz property to make advancements in Banach space theory in the mid-twentieth century. It is common for the Radon–Riesz property to also be referred to as the Kadets–Klee property or property (H). According to Robert Megginson, the letter H does not stand for anything: it was simply referred to as property (H) in a list of properties for normed spaces that starts with (A) and ends with (H). This list was given by K. Fan and I. Glicksberg (observe that the definition of (H) given by Fan and Glicksberg additionally includes the rotundity of the norm, so it does not coincide with the Radon–Riesz property itself). The "Riesz" part of the name refers to Frigyes Riesz, who also made some use of this property in the 1920s. Note that the name "Kadets–Klee property" is sometimes used to refer to the coincidence of the weak and norm topologies on the unit sphere of the normed space. Examples 1. Every real Hilbert space is a Radon–Riesz space. Indeed, suppose that H is a real Hilbert space and that $(x_{n})$ is a sequence in H converging weakly to a member $x$ of H.
Using the two assumptions on the sequence (weak convergence gives $\langle x_{n},x\rangle \to \langle x,x\rangle $ and $\langle x,x_{n}\rangle \to \langle x,x\rangle $, while convergence of the norms gives $\langle x_{n},x_{n}\rangle =\Vert x_{n}\Vert ^{2}\to \Vert x\Vert ^{2}$) together with the fact that $\langle x_{n}-x,x_{n}-x\rangle =\langle x_{n},x_{n}\rangle -\langle x_{n},x\rangle -\langle x,x_{n}\rangle +\langle x,x\rangle ,$ and letting n tend to infinity, we see that $\lim _{n\to \infty }{\langle x_{n}-x,x_{n}-x\rangle }=\Vert x\Vert ^{2}-\Vert x\Vert ^{2}-\Vert x\Vert ^{2}+\Vert x\Vert ^{2}=0.$ Thus H is a Radon–Riesz space. 2. Every uniformly convex Banach space is a Radon–Riesz space. See Section 3.7 of Haim Brezis's Functional Analysis. See also • Johann Radon • Frigyes Riesz • Hilbert space or Banach space theory • Weak topology • Normed space • Functional analysis • Schur's property References • Megginson, Robert E. (1998), An Introduction to Banach Space Theory, New York Berlin Heidelberg: Springer-Verlag, ISBN 0-387-98431-3
Uppsala University Publications: result list for the Journal of the European Mathematical Society

1. Andersen, Henning Haahr; Mazorchuk, Volodymyr (Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Algebra and Geometry). Category O for quantum groups. 2015. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 17, no 2, p. 405-431. Article in journal (Refereed). We study the BGG-categories O_q associated to quantum groups. We prove that many properties of the ordinary BGG-category O for a semisimple complex Lie algebra carry over to the quantum case. Of particular interest is the case when q is a complex root of unity. Here we prove a tensor decomposition for simple modules, projective modules, and indecomposable tilting modules.
Using the known Kazhdan-Lusztig conjectures for O and for finite-dimensional U_q-modules we are able to determine all irreducible characters as well as the characters of all indecomposable tilting modules in O_q. As a consequence, we also recover the known result that the generic quantum case behaves like the classical category O.

Auscher, Pascal (Univ. Paris-Sud, CNRS, Université Paris-Saclay); Egert, Moritz; Nyström, Kaj (Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory). L2 well-posedness of boundary value problems and the Kato square root problem for parabolic systems with measurable coefficients. 2016. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863. Article in journal (Refereed). We introduce a first order strategy to study boundary value problems of parabolic systems with second order elliptic part in the upper half-space. This involves a parabolic Dirac operator at the boundary. We allow for measurable time dependence and some transversal dependence in the coefficients. We obtain layer potential representations for solutions in some classes and prove new well-posedness and perturbation results. As a byproduct, we prove for the first time a Kato estimate for the square root of parabolic operators with time dependent coefficients. This considerably extends prior results obtained by one of us under time and transversal independence. A major difficulty compared to a similar treatment of elliptic equations is the presence of non-local fractional derivatives in time.

Avelin, Benny; Gianazza, Ugo (Dipartimento di Matematica "F. Casorati", Università di Pavia); Salsa, Sandro (Dipartimento di Matematica "F. Brioschi", Politecnico di Milano). Boundary Estimates for Certain Degenerate and Singular Parabolic Equations. 2016. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 18, no 2, p.
381-424. Article in journal (Refereed). We study the boundary behavior of non-negative solutions to a class of degenerate/singular parabolic equations, whose prototype is the parabolic p-Laplace equation. Assuming that such solutions continuously vanish on some distinguished part of the lateral part S_T of a Lipschitz cylinder, we prove Carleson-type estimates, and deduce some consequences under additional assumptions on the equation or the domain. We then prove analogous estimates for non-negative solutions to a class of degenerate/singular parabolic equations of porous medium type.

Azzam, Jonas (University of Washington, Seattle, USA); Hofmann, Steve (University of Missouri, Columbia, USA); Martell, Jose Maria (Instituto de Ciencias Matematicas, Madrid, Spain); Toro, Tatiana. A new characterization of chord-arc domains. 2017. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 19, no 4, p. 967-981. Article in journal (Refereed). We show that if Ω ⊂ R^{n+1}, n ≥ 1, is a uniform domain (also known as a 1-sided NTA domain), i.e., a domain which enjoys interior Corkscrew and Harnack Chain conditions, then uniform rectifiability of the boundary of Ω implies the existence of exterior corkscrew points at all scales, so that in fact, Ω is a chord-arc domain, i.e., a domain with an Ahlfors-David regular boundary which satisfies both interior and exterior corkscrew conditions, and an interior Harnack chain condition. We discuss some implications of this result for theorems of F. and M. Riesz type, and for certain free boundary problems.

Ekholm, Tobias (Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics). Rational symplectic field theory over Z_2 for exact Lagrangian cobordisms. 2008. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 10, no 3, p.
641-704. Article in journal (Refereed). We construct a version of rational symplectic field theory for pairs (X, L), where X is an exact symplectic manifold, where L ⊂ X is an exact Lagrangian submanifold with components subdivided into k subsets, and where both X and L have cylindrical ends. The theory associates to (X, L) a Z-graded chain complex of vector spaces over Z_2, filtered with k filtration levels. The corresponding k-level spectral sequence is invariant under deformations of (X, L) and has the following property: if (X, L) is obtained by joining a negative end of a pair (X, L) to a positive end of a pair (X, L), then there are natural morphisms from the spectral sequences of (X, L) and of (X, L) to the spectral sequence of (X, L). As an application, we show that if Λ ⊂ Y is a Legendrian submanifold of a contact manifold then the spectral sequences associated to (Y × R, Λ_s × R), where Y × R is the symplectization of Y and where Λ_s ⊂ Y is the Legendrian submanifold consisting of s parallel copies of Λ subdivided into k subsets, give Legendrian isotopy invariants of Λ.

Ekholm, Tobias (Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Algebra and Geometry; Inst. Mittag-Leffler, Auravägen 17, S-18260 Djursholm, Sweden); Honda, Ko (Univ. So. Calif., Los Angeles, CA 90089, USA); Kalman, Tamas (Tokyo Inst. Technol., Meguro-ku, Tokyo 152-8551, Japan). Legendrian knots and exact Lagrangian cobordisms. 2016. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 18, no 11, p. 2627-2689. Article in journal (Refereed). We introduce constructions of exact Lagrangian cobordisms with cylindrical Legendrian ends and study their invariants which arise from Symplectic Field Theory.
A pair (X, L) consisting of an exact symplectic manifold X and an exact Lagrangian cobordism L ⊂ X which agrees with cylinders over Legendrian links Λ+ and Λ- at the positive and negative ends induces a differential graded algebra (DGA) map from the Legendrian contact homology DGA of Λ+ to that of Λ-. We give a gradient flow tree description of the DGA maps for certain pairs (X, L), which in turn yields a purely combinatorial description of the cobordism map for elementary cobordisms, i.e., cobordisms that correspond to certain local modifications of Legendrian knots. As an application, we find exact Lagrangian surfaces that fill a fixed Legendrian link and are not isotopic through exact Lagrangian surfaces.

Johansson, Anders (Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Applied Mathematics); Öberg, Anders; Pollicott, Mark (University of Warwick, Coventry, England). Unique Bernoulli g-measures. 2012. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 14, no 5, p. 1599-1615. Article in journal (Refereed). We improve and subsume the conditions of Johansson and Öberg [18] and Berbee [2] for uniqueness of a g-measure, i.e., a stationary distribution for chains with complete connections. In addition, we prove that these unique g-measures have Bernoulli natural extensions. In particular, we obtain a unique g-measure that has the Bernoulli property for the full shift on finitely many states under any one of the following additional assumptions: (1) $\sum_{n=1}^{\infty}(\mathrm{var}_{n}\log g)^{2}<\infty$; (2) for any fixed $\epsilon >0$, $\sum_{n=1}^{\infty}e^{-(1/2+\epsilon )(\mathrm{var}_{1}\log g+\cdots +\mathrm{var}_{n}\log g)}=\infty$; (3) $\mathrm{var}_{n}\log g=o(1/\sqrt{n})$ as $n\to \infty$. That the measure is Bernoulli in the case of (1) is new. In (2) we have an improved version of Berbee's [2] condition (concerning uniqueness and Bernoullicity), allowing the variations of $\log g$ to be essentially twice as large.
Finally, (3) is an example showing that our main result is new both for uniqueness and for the Bernoulli property. We also conclude that we have convergence in the Wasserstein metric of the iterates of the adjoint transfer operator to the g-measure.

Lewis, John L. (University of Kentucky, Lexington, KY, USA). Quasi-linear PDEs and low-dimensional sets. 2018. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 20, no 7, p. 1689-1746. Article in journal (Refereed). In this paper we establish new results concerning boundary Harnack inequalities and the Martin boundary problem, for non-negative solutions to equations of $p$-Laplace type with variable coefficients. The key novelty is that we consider solutions which vanish only on a low-dimensional set $\Sigma$ in $\mathbb{R}^n$, and this is different compared to the more traditional setting of boundary value problems set in the geometrical situation of a bounded domain in $\mathbb{R}^n$ having a boundary with (Hausdorff) dimension in the range $[n-1,n)$. We establish our quantitative and scale-invariant estimates in the context of low-dimensional Reifenberg flat sets.

9. Lewis, John L.; Vogel, Andrew. On the dimension of p-harmonic measure in space. 2013. In: Journal of the European Mathematical Society (Print), ISSN 1435-9855, E-ISSN 1435-9863, Vol. 15, no 6, p. 2197-2256. Article in journal (Refereed). Let $\Omega \subset \mathbb{R}^{n}$, $n\geq 3$, and let $p$, $1<p<\infty$, $p\neq 2$, be given. In this paper we study the dimension of p-harmonic measures that arise from nonnegative solutions to the p-Laplace equation, vanishing on a portion of $\partial \Omega$, in the setting of $\delta$-Reifenberg flat domains. We prove, for $p\geq n$, that there exists $\tilde{\delta}=\tilde{\delta}(p,n)>0$ small such that if $\Omega$ is a $\delta$-Reifenberg flat domain with $\delta <\tilde{\delta}$, then p-harmonic measure is concentrated on a set of $\sigma$-finite $H^{n-1}$-measure.
We prove, for $p\geq n$, that for sufficiently flat Wolff snowflakes the Hausdorff dimension of p-harmonic measure is always less than $n-1$. We also prove that if $2<p<n$, then there exist Wolff snowflakes such that the Hausdorff dimension of p-harmonic measure is less than $n-1$, while if $1<p<2$, then there exist Wolff snowflakes such that the Hausdorff dimension of p-harmonic measure is larger than $n-1$. Furthermore, perturbing off the case $p=2$, we derive estimates for the Hausdorff dimension of p-harmonic measure when $p$ is near 2.
\begin{document} \title[General Central Limit Theorems]{General Central Limit Theorems for Associated Sequences} \author{$^{(1,2)}$ Harouna SANGARE} \author{$^{(1,3,4)}$ Gane Samb LO} \email[Harouna SANGARE]{[email protected]} \email[Gane Samb LO]{[email protected]} \begin{abstract} In this paper, we provide general central limit theorems (\textit{CLT}'s) for associated random variables (\textit{rv}'s) following the approaches used by Newman (1980) and Oliveira et al. (2012). Given some assumptions, a Lyapounov-Feller-Levy type theorem is stated. We next specify different particular \textit{CLT} versions for associated sequences based on moment conditions. A comparison study with available \textit{CLT}'s is performed. As a by-product, we complete an important available theorem in which an assumption was missing.\\ \noindent $^{(1)}$ LERSTAD, Gaston Berger University, Saint-Louis, Senegal.\newline \noindent $^{(2)}$ DER de Math\'ematiques et d'Informatique, USTTB, Bamako, Mali.\newline \noindent $^{(3)}$ AUST - African University of Sciences and Technology, Abuja, Nigeria.\newline $^{(4)}$ LSTA, Pierre and Marie Curie University, Paris VI, France.\newline \noindent \textit{Corresponding author}. Gane Samb Lo. Email: [email protected]. Postal address: 1178 Evanston Dr NW T3P 0J9, Calgary, Alberta, Canada. \end{abstract} \keywords{Central limit theorems, weak convergence, associated sequences, asymptotic statistics, stationary sequences} \subjclass[2010]{60FXX, 60F05, 60G11} \maketitle \Large \section{Introduction} \label{sec1} \noindent We consider the problem of the central limit theorem for associated sequences. This problem goes back to Newman \cite{newman80}. Since then, a number of $CLT$'s, strong laws of large numbers (\textit{SLLN}'s), weak laws of large numbers (\textit{WLLN}'s), invariance principles and laws of the iterated logarithm (\textit{LIL}'s) have been provided in the recent literature by different authors.
Dabrowski and co-authors (see \cite{burton} and \cite{dabro}) considered weakly associated random variables to establish invariance principles in the lines of Newman and Wright \cite{newmanwright}, as well as Berry-Esseen-type results and functional \textit{LIL}'s. The weak convergence of empirical processes based on associated sequences has been discussed by Louhichi \cite{louhichi} and Yu \cite{yu93}.\newline \noindent The most general \textit{CLT}'s seem to be the ones provided by Cox and Grimmett \cite{cox} for arbitrary associated \textit{rv}'s fulfilling a number of moment conditions and those given by Oliveira \cite{paulo}.\newline \noindent This question arises in the active research field on the concept of association and its applications in many sciences, especially in percolation theory in Physics and in Reliability. The book by Rao \cite{rao} and the monograph by Oliveira \cite{paulo} present a review of this research. The book by Bulinski and Shashkin \cite{bulinski2007} treats random associated sequences, intensively uses properties of regularly varying functions, and provides \textit{CLT}'s, \textit{LLN}'s, \textit{LIL}'s and invariance principles.\newline \noindent Although many results concerning the \textit{CLT} problem are available for such sequences, there are still a number of open problems, especially regarding nonstationary sequences. \noindent Here, we intend to provide more general \textit{CLT}'s for arbitrary associated sequences. Precisely, we want to use all the power of the Newman method and express the conditions in the most general frame based on moment conditions, so that any other result might be derived from them. In this way, a Lyapounov-Feller-Levy type theorem can be stated under some general assumptions. From this approach, more general \textit{CLT}'s may be obtained simply by going back through Newman's method.\\ \noindent The paper is organized as follows.
Since association is the central notion used here, we first give a quick reminder of it in Section \ref{sec2}. In Section \ref{sec3}, we review the \textit{CLT}'s available in the literature with the aim of comparing them to our findings. In Section \ref{sec4}, we state our general \textit{CLT} versions for arbitrary associated \textit{rv}'s and give a comparison study and concluding remarks. \section{A brief Reminder of Association} \label{sec2}\noindent The concept of association was introduced by Lehmann (1966) \cite{lehmann} in the bivariate case, and extended later to the multivariate case by Esary, Proschan and Walkup (1967) \cite{esary}. \noindent The concept of association for random variables generalizes that of independence and models a great variety of stochastic systems. \newline \noindent This property also arises in Physics, where it is quoted under the name of the FKG property (Fortuin, Kasteleyn and Ginibre (1971) \cite{fortuin}), in percolation theory and even in Finance (see \cite{jiazhu}).\newline \noindent The definitive definition was given by Esary, Proschan and Walkup (1967) \cite{esary} as follows. \begin{definition} A finite sequence of rv's $(X_{1},...,X_{n})$ is associated if, for any couple of real and coordinate-wise non-decreasing functions $h$ and $g$ defined on $\mathbb{R}^{n}$, we have \begin{equation} Cov(h(X_{1},...,X_{n}),\ \ g(X_{1},...,X_{n}))\geq 0, \label{asso} \end{equation} \noindent whenever the covariance exists. An infinite sequence of rv's is associated whenever all its finite subsequences are associated. \end{definition} \noindent We have a few interesting properties, to be found in \cite{rao}:\\ \noindent \textbf{(P1)} A sequence of independent rv's is associated.\newline \noindent \textbf{(P2)} Partial sums of associated rv's are associated. \newline \noindent \textbf{(P3)} Order statistics of independent rv's are associated.
\newline \noindent \textbf{(P4)} Non-decreasing functions and non-increasing functions of associated variables are associated.\newline \noindent \textbf{(P5)} Let the sequence $Z_{1},Z_{2},...,Z_{n}$ be associated and let $(a_{i})_{1\leq i\leq n}$ be positive numbers and $(b_{i})_{1\leq i\leq n}$ real numbers. Then the \textit{rv}'s $a_{i}(Z_{i}-b_{i})$ are associated.\newline \noindent As other immediate examples of associated sequences, we may cite Gaussian random vectors with nonnegatively correlated components (see \cite{pitt}) and homogeneous Markov chains (see \cite{daley}).\newline \noindent Demimartingales arise from partial sums of centered associated variables exactly as martingales arise from partial sums of centered independent random variables. We have \begin{definition} A sequence of rv's $\{S_{n},n\geq 1\}$ in $L^{1}(\Omega ,\mathcal{A},\mathbb{P})$ is a demimartingale when for any $j\geq 1$, for any coordinatewise nondecreasing function $g$ defined on $\mathbb{R}^{j}$, we have \begin{equation} \mathbb{E}\left( (S_{j+1}-S_{j})\ g(S_{1},...,S_{j})\right) \geq 0,\ \ j\geq 1. \label{defmarting} \end{equation} \end{definition} \noindent Two particular cases should be highlighted. First, any martingale is a demimartingale. Secondly, partial sums $S_{0}=0$, $S_{n}=X_{1}+...+X_{n}$, $n\geq 1$, of associated and centered random variables $X_{1},X_{2},...$ are demimartingales. In this case, (\ref{defmarting}) becomes: \begin{equation*} \mathbb{E}\left( (S_{j+1}-S_{j})\ g(S_{1},...,S_{j})\right) =\mathbb{E}\left( X_{j+1}\ g(S_{1},...,S_{j})\right) =Cov\left( X_{j+1},g(S_{1},...,S_{j})\right) , \end{equation*} \noindent since $\mathbb{E}X_{j+1}=0$.
Since $(x_{1},...,x_{j+1})\longmapsto x_{j+1}$ and $(x_{1},...,x_{j+1})\longmapsto g(x_{1},...,x_{j})$ are coordinate-wise nondecreasing functions and since the $X_{1},X_{2},...$ are associated, we get \begin{equation*} \mathbb{E}\left( (S_{j+1}-S_{j})\ g(S_{1},...,S_{j})\right) =Cov\left( X_{j+1},g(S_{1},...,S_{j})\right) \geq 0. \end{equation*} \noindent Finally, we present the following key results for associated sequences, which appear in almost any paper on this topic and which we need for our proofs. A detailed review of these results is given in \cite{gslo}. \begin{lemma}[Hoeffding (1940), see \cite{rao}] \label{lemg1} Let $(X,Y)$ be a bivariate random vector such that $\mathbb{E}(X^{2})<+\infty $ and $\mathbb{E}(Y^{2})<+\infty .$ If $\left( X_{1},Y_{1}\right) $ and $\left( X_{2},Y_{2}\right) $ are two independent copies of $(X,Y),$ then we have \begin{equation*} 2Cov(X,Y)=\mathbb{E}(X_{1}-X_{2})(Y_{1}-Y_{2}). \end{equation*} \noindent We also have \begin{equation*} Cov(X,Y)=\int_{-\infty }^{+\infty }\int_{-\infty }^{+\infty }H(x,y)dxdy, \end{equation*} \noindent where \begin{equation*} H(x,y)=\mathbb{P}(X>x,Y>y)-\mathbb{P}(X>x)\mathbb{P}(Y>y). \end{equation*} \end{lemma} \begin{lemma}[Newman (1980), see \cite{newman80}] \label{lemg2} Suppose that $X$, $Y$ are two random variables with finite variance and that $f$ and $g$ are $C^{1}$ complex valued functions on $\mathbb{R}^{1}$ with bounded derivatives $f^{\prime }$ and $g^{\prime }.$ Then \begin{equation*} |Cov(f(X),g(Y))|\leq ||f^{\prime }||_{\infty }||g^{\prime }||_{\infty }Cov(X,Y). \end{equation*} \end{lemma} \noindent The following lemma is the most used tool in this field.
\begin{lemma}[Newman and Wright (1981), see \protect\cite{newmanwright}] \label{lemg3} Let $X_{1},X_{2},...,X_{n}$ be associated, then we have for all $t=(t_{1},...,t_{n})\in \mathbb{R}^{n}$, \begin{equation} \left\vert \psi _{_{(X_{1},X_{2},...,X_{n})}}(t)-\prod\limits_{i=1}^{n}\psi _{_{X_{i}}}(t_{i})\right\vert \leq \frac{1}{2}\sum_{1\leq i\neq j\leq n}\left\vert t_{i}t_{j}\right\vert Cov(X_{i},X_{j}). \label{decomp} \end{equation} \end{lemma} \noindent Before we proceed any further, let us give a brief review of \textit{CLT}'s for associated sequences in the stationary and non-stationary cases. \section{Central limit theorem for associated sequences} \label{sec3} \noindent Let $X_{1},X_{2},\cdots ,X_{n}$ be an associated sequence of mean-zero random variables defined on the same probability space ($\Omega ,\mathcal{A},\mathbb{P}$). Define for each $n\geq 1,$ $S_{n}=X_{1}+...+X_{n}.$ The CLT question for stationary associated sequences centers on the result of Newman (see \cite{newman80}), which proves that $S_{n}/\sqrt{n}$ converges to a normal random variable $\mathcal{N}(0,\sigma ^{2})$ whenever \begin{equation*} \sigma ^{2}=\mathbb{V}ar(X_{1})+2\sum_{j=2}^{\infty }Cov(X_{1},X_{j})<+\infty . \end{equation*} \noindent In such a situation, $s_{n}^{2}=\mathbb{V}ar(S_{n})$ satisfies \begin{equation*} s_{n}^{2}/n\rightarrow \sigma ^{2}\text{ as }n\rightarrow +\infty . \end{equation*} \noindent A number of invariance principles and other CLT's are available, but they are generally adaptations of this result of Newman. As to the general case, Cox and Grimmett (see \cite{cox}) did not assume stationarity in their results, which were stated for triangular arrays.
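As a purely illustrative numerical sketch (ours, not part of the original development), Newman's variance formula above can be checked on a stationary Gaussian AR(1) sequence with coefficient $\varphi \in [0,1)$ and unit innovation variance: its components are nonnegatively correlated, hence the sequence is associated by Pitt's result cited in the previous section, and a direct computation gives $\sigma^{2}=1/(1-\varphi)^{2}$.

```python
import numpy as np

# Illustrative check (not from the paper): a stationary AR(1) sequence with
# coefficient phi in [0,1) and unit innovation variance has autocovariance
# gamma(k) = phi**k / (1 - phi**2); for phi >= 0 it is a Gaussian sequence
# with nonnegative correlations, hence associated (Pitt's result).
def var_Sn_over_n(n, phi):
    gamma0 = 1.0 / (1.0 - phi**2)
    k = np.arange(1, n)
    # exact identity: Var(S_n)/n = gamma(0)*(1 + 2*sum_{k=1}^{n-1}(1-k/n)*phi^k)
    return gamma0 * (1.0 + 2.0 * np.sum((1.0 - k / n) * phi**k))

phi = 0.5
sigma2 = 1.0 / (1.0 - phi) ** 2    # Newman's sigma^2 = gamma(0)*(1+phi)/(1-phi)
print(var_Sn_over_n(10_000, phi))  # close to sigma2 = 4.0
```

The exact quantity $\mathbb{V}ar(S_n)/n$ visibly approaches $\sigma^{2}=4$ for $\varphi=1/2$, in line with $s_n^2/n \to \sigma^2$.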
Formulated for simple sequences, their result is that $S_{n}/s_{n}$ weakly converges to a normal random variable $\mathcal{N}(0,1)$ if $\mathbb{V}ar(X_{n})$ is bounded away from zero, the sequence of third moments $\mathbb{E}\left\vert X_{n}\right\vert ^{3}$ is bounded, and there exists a function $u(r)$, $r\in \{0,1,...\}$, such that $u(r)\rightarrow 0$ as $r\rightarrow +\infty $ and such that for all $k\geq 1$ and all $r\geq 0$ \begin{equation*} \sum_{j:|k-j|\geq r}Cov(X_{j},X_{k})\leq u(r). \end{equation*} \noindent Let us recall their $CLT$ as follows. \begin{theorem} \label{cox} Let $X_{1},X_{2},\cdots ,X_{n}$ be an associated sequence of mean-zero random variables defined on the same probability space ($\Omega ,\mathcal{A},\mathbb{P}$). Suppose there exist positive and finite constants $c_{1}$ and $c_{2}$ such that \begin{equation} Var(X_{j})\geq c_{1}\text{ and }\mathbb{E}\left\vert X_{j}\right\vert ^{3}\leq c_{2}\text{ for all }j\geq 1 \label{coxA1}, \end{equation} \noindent and there is a function $u(r)$ of $r\in \mathbb{N}$ such that $u(r)\rightarrow 0$ as $r\rightarrow +\infty $ and for any $r\geq 1,$ \begin{equation} \sup_{j\geq 1}\sum_{i:\left\vert j-i\right\vert \geq r}cov(X_{i},X_{j})\leq u(r). \label{coxA2} \end{equation} \noindent Then \begin{equation*} S_{n}/s_{n}\rightsquigarrow N(0,1)\text{ as }n\rightarrow +\infty , \end{equation*} \noindent where, throughout the text, the symbol $\rightsquigarrow $ stands for weak convergence. \end{theorem} \noindent Oliveira \textsl{et al.} \cite{paulo} have proved general \textit{CLT}'s, still using the Newman approach.\\ \noindent First, they obtained \begin{theorem}[see \protect\cite{paulo}, page 105, Theorem 4.4] \label{theooliv1} Let $X_{n}$, $n\in \mathbb{N}$, be centered, square-integrable and associated random variables. For each $n\in \mathbb{N}$, let $\ell _{n}\in \mathbb{N}$ and $m_{n}=\left[ \frac{n}{\ell _{n}}\right] $.
Define, for $j=1,...,m_{n},$ $Y_{j,\ell _{n}}=\sum_{i=\left( j-1\right) \ell _{n}+1}^{j\ell _{n}}X_{i}$ and $Y_{m_{n}+1,\ell _{n}}=\sum_{i=m_{n}\ell _{n}+1}^{n}X_{i}$. Assume that $m_{n}\rightarrow +\infty$, and that \begin{equation} \frac{1}{s_{n}^{2}}\sum_{j=1}^{m_{n}}\mathbb{V}ar\left( Y_{j,\ell _{n}}\right) \rightarrow 1, \label{paulaA1} \end{equation} \begin{equation} \left\vert \mathbb{E}\exp \left( \frac{it}{s_{n}}S_{n}\right) -\prod\limits_{j=1}^{m_{n}}\mathbb{E}\exp \left( \frac{it}{s_{n}}Y_{j,\ell _{n}}\right) \right\vert \rightarrow 0,\text{ }t\in \mathbb{R}, \label{pauloA2} \end{equation} \noindent and \begin{equation} \forall \text{ }\varepsilon >0,\text{ }\frac{1}{s_{n}^{2}}\sum_{j=1}^{m_{n}}\int_{\left\{ \left\vert Y_{j,\ell _{n}}\right\vert \geq \varepsilon s_{n}\right\} }Y_{j,\ell _{n}}^{2}d\mathbb{P}\rightarrow 0. \label{pauloA3} \end{equation} \noindent Then \begin{equation*} \frac{1}{s_{n}}S_{n}\rightsquigarrow \mathcal{N}(0,1). \end{equation*} \end{theorem} \noindent Next, they obtained the following result using a Feller-Levy condition. \begin{theorem}[see \protect\cite{paulo}, page 108, Theorem 4.8] \label{theooliv2} Let $X_{n}$, $n\in \mathbb{N}$, be centered, square-integrable and associated random variables. Assume that \begin{equation} u\left( n\right) \rightarrow 0,\text{ }u\left( 1\right) <+\infty , \label{pauloB1} \end{equation} \begin{equation} \inf_{n\in \mathbb{N}}\frac{1}{n}s_{n}^{2}>0, \label{pauloB2} \end{equation} \begin{equation} \forall \text{ }\varepsilon >0,\text{ }\frac{1}{s_{n}^{2}}\sum_{j=1}^{m_{n}}\int_{\left\{ \left\vert X_{j}\right\vert \geq \varepsilon s_{n}\right\} }X_{j}^{2}d\mathbb{P}\rightarrow 0. \label{pauloB3} \end{equation} \noindent Then \begin{equation*} \frac{1}{s_{n}}S_{n}\rightsquigarrow \mathcal{N}(0,1). \end{equation*} \end{theorem} \noindent \textbf{Remark on whether the assumptions of the theorem are enough to get the \textit{CLT}}.
It seems to us that the conditions given by this theorem are not enough, as we try to show in Subsection \ref{subsec42} below. We think that the following assumption, denoted (\textit{Hab}) below, \begin{equation*} \frac{1}{s_{n}^{2}}\mathbb{V}ar\left( \sum_{i=m(n)\ell (n)+1}^{n}X_{i}\right) \rightarrow 0\text{ as }n\rightarrow +\infty , \end{equation*} \noindent should be added. This assumption is implied by the following simpler one, denoted (\textit{HNab}) below: \begin{equation*} \frac{1}{s_{n}^{2}}\sum_{i=t_{n}}^{u_{n}}var(X_{i})\rightarrow 0\text{ as }n\rightarrow \infty , \end{equation*} \noindent for $0\leq t_{n}\leq u_{n}\leq n$, $u_{n}-t_{n}\leq \ell(n)$, $(u_{n}-t_{n})/n\rightarrow 0$ as $n\rightarrow \infty$.\\ \noindent In the stationary case, this assumption is immediate. The foundation of our remark is given in Point (1) in Subsubsection \ref{subsubsec423} of Subsection \ref{subsec42} in Section \ref{sec4}.\\ \noindent Our objective in this paper is to express \textit{CLT's} in the most general setting, still using the Newman approach, and to derive the former results as particular cases. With respect to the former results described above, we simplify the approach and get the best we can do by formulating a Lyapounov-Feller-Levy type of \textit{CLT}. The general conditions are next expressed as moment conditions, also stated in a general setting. Existing versions are all included in our statements. We conclude that more general \textsl{CLT}'s cannot be obtained without leaving the Newman approach.\\ \noindent Let us begin by introducing the following assumptions. \noindent There exists a sequence $\ell (n)$ of positive integers such that $n=m(n)\ell (n)+r(n),$ with $0\leq r(n)<\ell (n),$ $m(n)\rightarrow +\infty $ and \begin{equation} (\ell (n)/n,r(n)/n)\rightarrow (0,0)\text{ \ }as\text{ \ }n\rightarrow +\infty .
\tag{L} \end{equation} \noindent We want to stress that the integers $m=m(n),$ $\ell =\ell (n)$ and $r=r(n)$ depend on $n$ throughout the text, even though we may and do drop the label $n$ in many situations for simplicity's sake.\\ \noindent On top of this general assumption, we may require the following ones. \begin{equation} \frac{\ell (n)}{s_{n}^{2}}\rightarrow 0\text{ as }n\rightarrow +\infty . \label{H0} \end{equation} \begin{equation} \frac{\ell (n)}{s_{n}^{2}}\sum_{j=1}^{m(n)}\mathbb{V}ar\left( \frac{S_{j\ell (n)}-S_{(j-1)\ell (n)}}{\sqrt{\ell (n)}}\right) \rightarrow 1\text{ as }n\rightarrow +\infty . \tag{Ha} \end{equation} \begin{equation} \frac{1}{s_{n}^{2}}\mathbb{V}ar\left( \sum_{i=m(n)\ell (n)+1}^{n}X_{i}\right) \rightarrow 0\text{ as }n\rightarrow +\infty . \tag{Hab} \end{equation} \noindent \begin{equation} \sup_{1\leq j\leq m(n)+1}\frac{\ell (n)}{s_{n}^{2}}\mathbb{V}ar\left( \frac{S_{j\ell (n)}-S_{(j-1)\ell (n)}}{\sqrt{\ell (n)}}\right) =C_{1}(n)\rightarrow 0\text{ as }n\rightarrow +\infty . \tag{Hb} \end{equation} \noindent We have, for some $\delta >0,$ $\mathbb{E}\left\vert X_{j}\right\vert ^{2+\delta }<+\infty ,j\geq 1$, and the Lyapounov condition holds: \begin{equation} \frac{\ell ^{3/2}(n)}{s_{n}^{2+\delta }}\sum_{j=1}^{m}\mathbb{E}\left\vert \frac{S_{j\ell (n)}-S_{(j-1)\ell (n)}}{\sqrt{\ell (n)}}\right\vert ^{2+\delta }=C_{2}(n)\rightarrow 0\text{ as }n\rightarrow +\infty . \tag{Hc} \end{equation} \noindent In the sequel, it may be handy to use the notation \begin{equation} Y_{j,\ell }=\frac{S_{j\ell (n)}-S_{(j-1)\ell (n)}}{\sqrt{\ell (n)}},1\leq j\leq m=m(n). \label{THEY} \end{equation} \noindent In the next section, we will state two \textit{CLT}'s based on these hypotheses. A third one will be a completion of Theorem \ref{theooliv2}.
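As a small illustrative sketch (ours, not from the original text), the decomposition \textit{(L)} can be realized, for instance, with the hypothetical concrete choice $\ell(n)=\lfloor\sqrt{n}\rfloor$; any $\ell(n)\rightarrow+\infty$ with $\ell(n)/n\rightarrow 0$ works. The snippet below checks $n=m(n)\ell(n)+r(n)$ together with $\ell(n)/n\rightarrow 0$ and $r(n)/n\rightarrow 0$:

```python
import math

# Hypothetical concrete choice of block length: ell(n) = floor(sqrt(n)).
# Any ell(n) -> +infinity with ell(n)/n -> 0 realizes decomposition (L).
def blocks(n):
    ell = math.isqrt(n)          # ell(n), block length
    m = n // ell                 # m(n), number of full blocks
    r = n - m * ell              # r(n), remainder, 0 <= r(n) < ell(n)
    return m, ell, r

for n in (10**2, 10**4, 10**6):
    m, ell, r = blocks(n)
    assert n == m * ell + r and 0 <= r < ell
    print(n, m, ell, r, ell / n, r / n)   # ell/n and r/n shrink to 0
```

With this choice $m(n)\approx\sqrt{n}\rightarrow+\infty$ while both ratios $\ell(n)/n$ and $r(n)/n$ vanish, as required by \textit{(L)}.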
Next, the results are particularized into more specific versions.\\ \section{Results and Commentaries} \label{sec4} \noindent In this section, we present general \textit{CLT}'s for associated \textit{rv}'s, then give different forms for specific types of independent and dependent data, and finally make a comparison with the available results. \subsection{General \textit{CLT}'s} \label{subsec41} \noindent We have the following results. \begin{theorem} \label{theo1} Let $X_{1},X_{2},\cdots ,X_{n}$ be an associated sequence of mean-zero random variables defined on the same probability space ($\Omega ,\mathcal{A},\mathbb{P}$). If the sequence is stationary and $\sigma ^{2}$, defined in Section \ref{sec3}, is finite, then \begin{equation*} \dfrac{S_{n}}{\sqrt{n}}=\dfrac{X_{1}+X_{2}+\cdots +X_{n}}{\sqrt{n}}\rightsquigarrow \mathcal{N}(0,\sigma ^{2})\ as\ n\rightarrow +\infty . \end{equation*} \noindent In the general setting, if \textit{(L)}, \textit{(H0)}, \textit{(Ha)}, \textit{(Hb)} and \textit{(Hc)} hold, then \begin{equation*} \dfrac{S_{n}}{s_{n}}=\dfrac{X_{1}+X_{2}+\cdots +X_{n}}{s_{n}}\rightsquigarrow \mathcal{N}(0,1)\ as\ n\rightarrow +\infty . \end{equation*} \end{theorem} \noindent Next, we state a Lyapounov-Feller-Levy type theorem under some assumptions. \begin{theorem} \label{theo2} Let $X_{1},X_{2},\cdots ,X_{n}$ be an associated sequence of mean-zero random variables defined on the same probability space ($\Omega ,\mathcal{A},\mathbb{P}$). Denote, for each $j\in \{1,...,m\}$, $\tau _{j}^{2}=Var\left( S_{j\ell }-S_{(j-1)\ell }\right) =\mathbb{E}\left( S_{j\ell }-S_{(j-1)\ell }\right) ^{2}$ and \begin{equation*} \nu _{m(n)}^{2}=\tau _{1}^{2}+...+\tau _{m(n)}^{2},n\geq 1\text{.} \end{equation*} \noindent Assume that the assumptions \textit{(L)} and \textit{(Ha)} hold and that either \textit{(Hab)} or \textit{(Hb)} is true. Then we have the following equivalence result: \begin{equation*} \max_{1\leq j\leq m(n)}\mathbb{E}\left( S_{j\ell }-S_{(j-1)\ell }\right) ^{2}/s_{n}^{2}\rightarrow 0\text{ as }n\rightarrow +\infty \end{equation*} \noindent and \begin{equation*} S_{n}/s_{n}\rightsquigarrow \mathcal{N}(0,1)\ as\ n\rightarrow +\infty , \end{equation*} \noindent if and only if, for any $\varepsilon >0$, \begin{equation} \frac{1}{s_{n}^{2}}\sum_{j=1}^{m(n)}\mathbb{E}\left( \left( S_{j\ell }-S_{(j-1)\ell }\right) ^{2}1_{\left( \left\vert S_{j\ell }-S_{(j-1)\ell }\right\vert \geq \varepsilon \nu _{m(n)}\right) }\right) \rightarrow 0\text{ as }n\rightarrow +\infty . \label{fellerLevy01b} \end{equation} \noindent Moreover, the sequence $(\nu _{m(n)})_{n\geq 1}$ may be replaced by the sequence $(s_{n})_{n\geq 1}$ in Condition (\ref{fellerLevy01b}) to give \begin{equation} \frac{1}{s_{n}^{2}}\sum_{j=1}^{m(n)}\mathbb{E}\left( \left( S_{j\ell }-S_{(j-1)\ell }\right) ^{2}1_{\left( \left\vert S_{j\ell }-S_{(j-1)\ell }\right\vert \geq \varepsilon s_{n}\right) }\right) \rightarrow 0\text{ as }n\rightarrow +\infty . \label{fellerlevy01c} \end{equation} \end{theorem} \begin{theorem} \label{theo3} Let $X_{1},X_{2},\cdots ,X_{n}$ be an associated sequence of mean-zero random variables defined on the same probability space ($\Omega ,\mathcal{A},\mathbb{P}$). If \begin{equation*} \left\vert \Psi _{\frac{S_{m\ell }}{s_{n}}}(t)-\prod\limits_{j=1}^{m}\Psi _{Y_{j,n}}\left( \frac{\sqrt{\ell }}{s_{n}}t\right) \right\vert \rightarrow 0\text{ as }n\rightarrow +\infty, \end{equation*} \noindent then we have the following equivalence result: \begin{equation*} \max_{1\leq j\leq m(n)}\mathbb{E}\left( S_{j\ell }-S_{(j-1)\ell }\right) ^{2}/s_{n}^{2}\rightarrow 0\text{ as }n\rightarrow +\infty \end{equation*} \noindent and \begin{equation*} S_{n}/s_{n}\rightsquigarrow \mathcal{N}(0,1)\ as\ n\rightarrow +\infty , \end{equation*} \noindent if and only if, for any $\varepsilon >0$, Formula (\ref{fellerLevy01b}) holds.
\end{theorem} \begin{remark} \noindent Let us make the following remarks.\newline \noindent \textbf{(1)} The method of proving the above theorems consists of decomposing the sums of variables into sums of blocks of variables and treating these as if they were independent. Naturally, we need some control on the approximation between the sums of the dependent blocks and their independent counterparts. This control is achieved using characteristic functions and is based on the inequality in Lemma \ref{lemg3} of Newman. Our approach is to go as far as possible using only moment conditions.\newline \noindent \textbf{(2)} Theorem \ref{theo2} is not yet a Lyapounov-Feller-Levy Theorem $(LFLT)$. Using Lemma \ref{lemg3}, it only says that we have a $LFLT$ under assumptions that turn the CLT problem into one concerning independent variables. A full $LFLT$ cannot be achieved as long as the proofs are based on the approximation of Lemma \ref{lemg3}. \end{remark} \noindent Before we proceed to the proofs in Subsection \ref{subsec5}, we are now going to derive some consequences and particular cases of the theorems. \subsection{Commentaries and Consequences} \label{subsec42} \subsubsection{The most general approach leading to a Feller-Levy CLT type} \label{subsubsec421} We begin with general comments on the approach.\\ \noindent Almost all the available \textit{CLT} results use Newman's method based on Lemma \ref{lemg3}. The approach we use is intended to get the sharpest results possible in that frame. In earlier versions of our results, we were not aware of the results of Oliveira \textit{et al. \cite{paulo}.} However, with the knowledge of these results, our work still presents a number of significant advantages we want to highlight here. Actually, Oliveira et al.
\cite{paulo} attain the best we can do in the Newman approach: the only way to find assumptions under which \begin{equation*} S_{n}/s_{n}\rightsquigarrow \mathcal{N}(0,1) \end{equation*} \noindent holds is to reduce it to \begin{equation} \prod\limits_{j=1}^{m}\Psi _{Y_{j,n}}\left( \frac{\sqrt{\ell }}{s_{n}}t\right) \rightarrow \exp (-t^{2}/2)\text{ }as\text{ }n\rightarrow +\infty , \label{C1} \end{equation} \noindent once the random variables $Y_{j,\ell }$, with characteristic functions $\Psi _{Y_{j,n}}$, are defined as in Formula (\ref{THEY}) based on the decomposition \textit{(L)} (we recall that our notation $Y_{j,\ell }$ is not the same as that of \cite{paulo}).\\ \noindent This is the justification of Assumption (\ref{pauloA2}) above, which corresponds to the equivalent one we used, namely \begin{equation} \left\vert \Psi _{\frac{S_{m\ell }}{s_{n}}}(t)-\prod\limits_{j=1}^{m}\Psi _{Y_{j,n}}\left( \frac{\sqrt{\ell }}{s_{n}}t\right) \right\vert \rightarrow 0\text{ as }n\rightarrow +\infty . \label{C2} \end{equation} \noindent From there, the authors did not, as far as we know, capitalize on this fact in order to obtain a final Feller-Levy \textit{CLT} version, as we did in Theorem \ref{theo3}. In our view, this version is the starting point for new \textit{CLT}'s outside the Newman approach. \subsubsection{General condition} \label{subsubsec422} Given the best the Newman approach can give, it remains to find the most general conditions ensuring (\ref{C1}) and (\ref{C2}). If we wish to express (\ref{C2}) directly in terms of the $X_{i}$'s, Oliveira \textit{et al.} \cite{paulo} proved on page 109 that their assumption (\ref{pauloB3}) implies Formula (\ref{fellerlevy01c}) in Theorem \ref{theo2} above. In general, authors usually provide \textit{CLT}'s based on conditions ensuring (\ref{C1}) and (\ref{C2}).\\ \noindent In that specific case, we proceeded in two directions:\\ \noindent \textbf{(1)} Expressing general conditions based on moments.
We will see in the next subsection how the available \textit{CLT}'s may be derived from Theorem \ref{theo2}.\\ \noindent \textbf{(2)} Keeping the notation of the decomposition $(L)$ in the assumptions. This will allow us, in particular cases, to base methods on specific values of $m(n)$ and $\ell (n)$. \subsubsection{Comparisons} \label{subsubsec423} Let us highlight some comparison results.\\ \noindent \textbf{(1)} \textbf{With Theorem \ref{theooliv2}. A possible gap in Theorem \ref{theooliv2} of \cite{paulo}}.\\ \noindent By setting \begin{equation*} Y_{j,\ell (n)}^{\ast }=Y_{j,\ell (n)}\text{ for }j=1,...,m(n)\text{ and }Y_{m+1,\ell }^{\ast }=\sum_{i=m(n)\ell (n)+1}^{n}X_{i}/\sqrt{\ell (n)}, \end{equation*} \noindent we have \begin{equation*} S_{n}=\sqrt{\ell (n)}\sum_{j=1}^{m(n)+1}Y_{j,\ell (n)}^{\ast }. \end{equation*} \noindent It comes that \begin{eqnarray*} s_{n}^{2}=Var(S_{n})&=&\ell (n)\sum_{j=1}^{m(n)}var(Y_{j,\ell (n)})+\ell (n)Var(Y_{m(n)+1,\ell (n)}^{\ast })\\ &+&2\ell (n)\sum_{1\leq h<k\leq m(n)+1}cov(Y_{h,\ell (n)}^{\ast },Y_{k,\ell (n)}^{\ast }).
\end{eqnarray*} \noindent By definition of the Cox coefficient (as named by Bulinski \textit{et al.} \cite{bulinski2007}), and since the indices of the $X_{i}$'s in $cov(Y_{h,\ell }^{\ast },Y_{k,\ell }^{\ast })$ are separated by $1,...,\ell $ points in absolute value, and the $X_{i}$'s are therein normalized by $\sqrt{\ell }$, we have \begin{eqnarray*} \sum_{1\leq h<k\leq m(n)+1}cov(Y_{h,\ell (n)}^{\ast },Y_{k,\ell (n)}^{\ast }) &\leq &\frac{1}{\ell }\sum_{h=1}^{\ell (n)}\sup_{i\geq 1}\sum_{k:\left\vert k-i\right\vert \geq h}cov(X_{i},X_{k}) \\ &\leq &\frac{1}{\ell} \sum_{i=1}^{\ell(n)}u(i), \end{eqnarray*} \noindent and then \begin{equation*} s_{n}^{2}=Var(S_{n})\leq \ell \sum_{j=1}^{m(n)}var(Y_{j,\ell (n)})+\ell (n)Var(Y_{m(n)+1,\ell (n)}^{\ast })+2\sum_{i=1}^{\ell (n)}u(i), \end{equation*} \noindent which gives \begin{eqnarray*} \left\vert 1-\frac{\ell (n)}{s_{n}^{2}}\sum_{j=1}^{m}var(Y_{j,\ell (n)})-\frac{\ell (n)}{s_{n}^{2}}Var(Y_{m(n)+1,\ell (n)}^{\ast })\right\vert &\leq &\frac{2\ell (n)}{s_{n}^{2}}\left\{ \frac{1}{\ell (n)}\sum_{i=1}^{\ell(n)}u(i)\right\} \\ &\rightarrow &0\text{ as }n\rightarrow \infty , \end{eqnarray*} \noindent by C\'{e}saro's Lemma if \begin{equation} \limsup_{n\rightarrow +\infty }\frac{\ell (n)}{s_{n}^{2}}<+\infty . \label{pauloB2S} \end{equation} \noindent Here, it seems to us that the authors of \cite{paulo} might not have taken into account the term $\sqrt{\ell }\,Y_{m+1,\ell }^{\ast }$ at line -7 of their page 108.
At line -6 of the same page, their formula $S_{n}=Y_{1,\ell _{n}}+...+Y_{m_{n},\ell _{n}}$ also fails to include the remaining $X_{i}$'s corresponding to $i\in [m_{n}\ell _{n}+1, m_{n}\ell _{n}+r_{n}]$, where $m_{n}$ is the integer part of $n/\ell _{n}$ and $r_{n}=n-m_{n}\ell _{n}.$ And although it is possible to get rid of the term $\sqrt{\ell }\,Y_{m+1,\ell }^{\ast}$ at line -1 of their page 108, as we explain in the lines following the remark $(R2)$ in the proof of Theorem \ref{theo1} below, we still think it should be handled in the proof of Formula (\ref{C2}), as we did at the stage of Formula (\ref{C1A}) of the same proof below.\\ \noindent Based on this remark, if the hypotheses (\ref{pauloB1}) and (\ref{pauloB2}) are true and our ($Hab)$ holds, then Formula (\ref{pauloB2S}) holds and Formula (\ref{C2}) is true. The Feller-Levy theorem handles the remaining part. Further, \begin{equation*} \frac{\ell (n)}{s_{n}^{2}}Var\left( Y_{m(n)+1,\ell (n)}^{\ast }\right) \leq \frac{1}{s_{n}^{2}}\sum_{i=m(n)\ell (n)+1}^{n}var(X_{i})+\frac{2\ell (n)}{s_{n}^{2}}\left\{ \frac{1}{\ell (n)}\sum_{i=1}^{\ell (n)}u(i)\right\} .
\end{equation*} \noindent Then, if Assumptions (\ref{pauloB1}) and (\ref{pauloB2S}) hold, ($Hab)$ is implied by a general condition of the form \begin{equation} \frac{1}{s_{n}^{2}}\sum_{i=t_{n}}^{u_{n}}var(X_{i})\rightarrow 0\text{ as }n\rightarrow \infty , \label{HNab} \end{equation} for $0\leq t_{n}\leq u_{n}\leq n$, $u_{n}-t_{n}\leq \ell (n)$, $(u_{n}-t_{n})/n\rightarrow 0$ as $n\rightarrow \infty$.\\ \noindent \textbf{(2) With the Cox-Grimmett Theorem \ref{cox}}.\\ \noindent It is immediate that the first part of Assumption (\ref{coxA1}) in that theorem, that is \begin{equation*} Var(X_{j})\geq c_{1}>0, \end{equation*} \noindent implies, by association, that \begin{equation*} s_{n}^{2}\geq \sum_{i=1}^{n}var(X_{i})\geq nc_{1} \end{equation*} \noindent and (\ref{pauloB2S}) holds since \begin{equation*} \limsup_{n\rightarrow +\infty }\frac{\ell (n)}{s_{n}^{2}}=\limsup_{n\rightarrow +\infty }\frac{n}{s_{n}^{2}}\times \frac{\ell (n)}{n}\leq \frac{1}{c_{1}}\limsup_{n\rightarrow +\infty }\frac{\ell (n)}{n}<+\infty . \end{equation*} \noindent Next, the second part, that is \begin{equation*} E\left\vert X_{j}\right\vert ^{3}\leq c_{2}<+\infty ,j\geq 1, \end{equation*} \noindent implies, by the inequality $\left\vert x\right\vert ^{p}\leq 1+\left\vert x\right\vert ^{q}$ for $1\leq p\leq q$ (see \cite{loeve}, page 157), that for $c_{3}=1+c_{2}$, \begin{equation*} E X_{j}^{2}\leq c_{3},j\geq 1, \end{equation*} \noindent and then Formula (\ref{HNab}) above holds since \begin{equation*} \frac{1}{s_{n}^{2}}\sum_{i=t_{n}}^{u_{n}}var(X_{i})\leq c_{3}\frac{(u_{n}-t_{n})}{s_{n}^{2}}\leq \frac{c_{3}}{c_{1}}\frac{(u_{n}-t_{n})}{n}\rightarrow 0.
\end{equation*} \noindent Next, by repeating the considerations given in Subsubsection \ref{subsubsec422}, Formula (\ref{fellerlevy01c}) of Theorem \ref{theo2} holds if \begin{equation*} \frac{1}{s_{n}^{2}}\sum_{j=1}^{m(n)}\int_{\left\{ \left\vert X_{j}\right\vert \geq \varepsilon s_{n}\right\} }X_{j}^{2}d\mathbb{P}\rightarrow 0\text{ as }n\rightarrow +\infty , \end{equation*} \noindent for any $\varepsilon >0$. But we have, under Condition (\ref{coxA1}) of Theorem \ref{cox}, \begin{eqnarray*} \frac{1}{s_{n}^{2}}\sum_{j=1}^{m(n)}\int_{\left\{ \left\vert X_{j}\right\vert \geq \varepsilon s_{n}\right\} }X_{j}^{2}d\mathbb{P} &=&\frac{1}{s_{n}^{2}}\sum_{j=1}^{m(n)}\int_{\left\{ \left\vert X_{j}\right\vert \geq \varepsilon s_{n}\right\} }\frac{\left\vert X_{j}\right\vert ^{3}}{\left\vert X_{j}\right\vert }d\mathbb{P} \\ &\leq &\frac{1}{\varepsilon s_{n}^{3}}\sum_{j=1}^{m(n)}\int_{\left\{ \left\vert X_{j}\right\vert \geq \varepsilon s_{n}\right\} }\left\vert X_{j}\right\vert ^{3}d\mathbb{P} \\ &\leq &\frac{m(n)c_{2}}{\varepsilon s_{n}^{3}}\leq \frac{c_{2}}{\varepsilon c_{1}^{3/2}}\frac{m(n)}{n^{3/2}}\rightarrow 0. \end{eqnarray*} \noindent Hence, Condition (\ref{fellerlevy01c}) is true. Finally, Condition (\ref{coxA2}) ensures (\ref{C2}) and the Cox-Grimmett Theorem \ref{cox} is obtained. \subsubsection{Conclusion} \label{subsubsec424} We conclude with two points.\\ \noindent \textbf{(A)} By combining our results with those of Oliveira \textit{et al.} \cite{paulo}, we have shown that the Newman approach already gives the best results in a Lyapounov-Feller-Levy type of \textit{CLT}. It is still possible to find different, more or less sharp, expressions of Conditions (\ref{C1}) and (\ref{C2}), stated in Subsection \ref{subsec42}. But no very different results can be expected there. Yet the \textit{CLT} problem remains largely open as long as the current results rely on the Newman approach.
Is it possible to get rid of this approach and to use a more general one to establish more general \textbf{CLT}'s? This seems to be the direction to take.\\ \noindent \textbf{(B)} In \cite{LAH2016}, an associated sequence is studied as a particular case. Using a direct method based on characteristic functions, this sequence has been shown to satisfy the \textit{CLT} property. Yet, it does not satisfy the Cox-Grimmett condition $\inf_{n\geq 1}\mathbb{E}X_{n}^{2}\geq c_{1}>0$. This kind of work may constitute a lead to more general \textit{CLT}'s.\\ \subsection{Proof of Theorem \ref{theo1}} \label{subsec5} As in almost all proofs of \textit{CLT}'s for associated or weakly associated \textit{rv}'s, our proof is based on the three steps of the original method of Newman and Wright (see \cite{newmanwright}). For the sake of compact notation, we simply set $\ell (n)=\ell $ and $m(n)=m$. Let us define $\Psi _{\frac{S_{n}}{s_{n}}}(t)=\mathbb{E}\left( e^{itS_{n}/s_{n}}\right) ,$ $t\in \mathbb{R}$.\\ \noindent First, we have for $t\in \mathbb{R}$, \begin{equation*} \left\vert \Psi _{_{\frac{S_{n}}{s_{n}}}}(t)-\Psi _{_{\frac{S_{m\ell }}{s_{n}}}}(t)\right\vert =\left\vert \mathbb{E}(e^{itS_{n}/s_{n}})-\mathbb{E}(e^{itS_{m\ell }/s_{n}})\right\vert \end{equation*} \begin{equation*} =\left\vert \mathbb{E}\left[ e^{itS_{m\ell }/s_{n}}\left( e^{it\left[ (S_{n}/s_{n})-(S_{m\ell }/s_{n})\right] }-1\right) \right] \right\vert \end{equation*} \begin{equation} \leq \mathbb{E}\left\vert e^{it\left( \frac{S_{n}}{s_{n}}-\frac{S_{m\ell }}{s_{n}}\right) }-1\right\vert . \label{b} \end{equation} \noindent But for any $x\in \mathbb{R}$, \begin{equation*} \left\vert e^{ix}-1\right\vert =|(\cos x-1)+i\sin x|=2\left\vert \sin \frac{x}{2}\right\vert \leq |x|.
\end{equation*} \noindent Thus the second member of $(\ref{b})$ is, by the Cauchy-Schwarz inequality, bounded by \begin{equation*} |t|\mathbb{E}\left\vert \frac{S_{n}}{s_{n}}-\frac{S_{m\ell }}{s_{n}}\right\vert \leq |t|\mathbb{V}ar\left( \frac{S_{n}}{s_{n}}-\frac{S_{m\ell }}{s_{n}}\right) ^{\frac{1}{2}} \end{equation*} \noindent with \begin{equation*} \delta _{m,\ell }=\mathbb{V}ar\left( \frac{S_{n}}{s_{n}}-\frac{S_{m\ell }}{s_{n}}\right) =\frac{1}{s_{n}^{2}}\mathbb{V}ar\left( S_{n}-S_{m\ell }\right), \end{equation*} \noindent which tends to zero as $n\rightarrow +\infty $ by $(Hb)$ since \begin{eqnarray} \delta _{m,\ell } &=&\frac{1}{s_{n}^{2}}\mathbb{V}ar\left( \ \sum\limits_{i=1}^{r}X_{m\ell +i}\right) \label{C1A} \\ &\leq &\frac{\ell }{s_{n}^{2}}\mathbb{V}ar\left( \frac{1}{\sqrt{\ell }}\ \sum\limits_{i=1}^{\ell }X_{m\ell +i}\right) \\ &\leq &C_{1}(n)\rightarrow 0. \end{eqnarray} \noindent This proves that \begin{equation} |\Psi _{\frac{S_{n}}{s_{n}}}(t)-\Psi _{\frac{S_{m\ell }}{s_{n}}}(t)|\rightarrow 0\text{ as }n\rightarrow +\infty . \label{commonStep01} \end{equation} \noindent \textbf{(R1)} Remark also, for the purpose of Theorem \ref{theo2}, that the same conclusion holds when $(Hab)$ is true, and in that case we do not need (\textit{Hb}) in addition.\\ \noindent Next, recall that $Y_{j,\ell }=(S_{j\ell }-S_{\ell (j-1)})/\sqrt{\ell }$, for $1\leq j\leq m$. Observe that \begin{equation*} \frac{S_{m\ell }}{s_{n}}=\frac{\sqrt{\ell }}{s_{n}}\sum_{j=1}^{m}Y_{j,\ell }. \end{equation*} \noindent According to Newman's inequality (see Lemma \ref{lemg3}), we have \begin{equation*} \left\vert \Psi _{\frac{S_{m\ell }}{s_{n}}}(t)-\prod\limits_{j=1}^{m}\Psi _{Y_{j,n}}\left( \frac{\sqrt{\ell }}{s_{n}}t\right) \right\vert \leq \frac{\ell t^{2}}{2s_{n}^{2}}\sum_{1\leq j\neq k\leq m}Cov(Y_{j,\ell },Y_{k,\ell }).
\end{equation*} \noindent But \begin{eqnarray*} \frac{\ell t^{2}}{2s_{n}^{2}}\sum_{1\leq j\neq k\leq m}Cov(Y_{j,\ell },Y_{k,\ell }) &=&\frac{\ell t^{2}}{2s_{n}^{2}}\mathbb{V}ar\left( \sum_{j=1}^{m}Y_{j,\ell }\right) -\frac{\ell t^{2}}{2s_{n}^{2}}\sum_{j=1}^{m}\mathbb{V}ar(Y_{j,\ell }) \\ &=&\frac{t^{2}}{2}\left[ \mathbb{V}ar\left( \frac{\sqrt{\ell }}{s_{n}}\sum_{j=1}^{m}Y_{j,\ell }\right) -\frac{\ell }{s_{n}^{2}}\sum_{j=1}^{m}\mathbb{V}ar\left( Y_{j,\ell }\right) \right] \\ &=&\frac{t^{2}}{2}\left[ \mathbb{V}ar\left( \frac{1}{s_{n}}S_{m\ell }\right) -\frac{\ell }{s_{n}^{2}}\sum_{j=1}^{m}\mathbb{V}ar\left( \frac{S_{j\ell }-S_{\ell (j-1)}}{\sqrt{\ell }}\right) \right] \\ &\leq &\frac{t^{2}}{2}\left[ 1-\frac{\ell }{s_{n}^{2}}\sum_{j=1}^{m}\mathbb{V}ar\left( \frac{S_{j\ell }-S_{\ell (j-1)}}{\sqrt{\ell }}\right) \right]\\ &-&\frac{t^{2}}{2s_{n}^{2}}\mathbb{V}ar\left( \ \sum\limits_{j=m\ell +1}^{n}X_{j}\right) , \end{eqnarray*} \noindent which tends to zero as $n\rightarrow +\infty $ by \textit{(Ha)} and \textit{(Hb)}, that is, \begin{equation} \left\vert \Psi _{\frac{S_{m\ell }}{s_{n}}}(t)-\prod\limits_{j=1}^{m}\Psi _{Y_{j,n}}\left( \frac{\sqrt{\ell }}{s_{n}}t\right) \right\vert \rightarrow 0\text{ as }n\rightarrow +\infty . \label{conclusion01} \end{equation} \noindent The proof will be completed by establishing that \begin{equation} \prod\limits_{j=1}^{m}\Psi _{Y_{j,n}}\left( \frac{\sqrt{\ell }}{s_{n}}t\right) \rightarrow \exp (-t^{2}/2)\text{ as }n\rightarrow +\infty . \label{lastStep} \end{equation} \noindent \textbf{(R2)} Here, we make a second remark, which is relevant to the proof of Theorem \ref{theo2} and, further, to generalisations of the results.
The above computations lead to \begin{eqnarray*} 0 &\leq &\frac{\ell t^{2}}{2s_{n}^{2}}\sum_{1\leq j\neq k\leq m}Cov(Y_{j,\ell },Y_{k,\ell })=\frac{t^{2}}{2}\left[ 1-\frac{\ell }{s_{n}^{2}}\sum_{j=1}^{m}\mathbb{V}ar\left( \frac{S_{j\ell }-S_{\ell (j-1)}}{\sqrt{\ell }}\right) \right]\\ &-&\frac{t^{2}}{2s_{n}^{2}}\mathbb{V}ar\left( \ \sum\limits_{j=m\ell +1}^{n}X_{j}\right) \\ &\leq &\frac{t^{2}}{2}\left[ 1-\frac{\ell }{s_{n}^{2}}\sum_{j=1}^{m}\mathbb{V}ar\left( \frac{S_{j\ell }-S_{\ell (j-1)}}{\sqrt{\ell }}\right) \right] . \end{eqnarray*} \noindent Then only \textit{(Ha)} is needed to ensure (\ref{conclusion01}).\\ \noindent We now resume the normal course of our demonstration. From this step, the conclusion on the weak law of $S_{n}/s_{n}$ comes uniquely from Formula (\ref{lastStep}), which expresses the weak convergence of sums of the form \begin{equation} T_{m(n)}^{\ast }=\frac{1}{s_{n}}\sum_{j=1}^{m(n)}V_{j}, \label{sumIndep} \end{equation} \noindent where the $V_{j}$'s are independent random variables such that, for each $j\in \{1,...,m\}$, $V_{j}$ has the same law as $S_{j\ell }-S_{(j-1)\ell }.$ Recall that, for each $j\in \{1,...,m\},\tau _{j}^{2}=Var\left( S_{j\ell }-S_{(j-1)\ell }\right) =\mathbb{E}\left( S_{j\ell }-S_{(j-1)\ell }\right) ^{2}$ and \begin{equation*} \nu _{m(n)}^{2}=\tau _{1}^{2}+...+\tau _{m(n)}^{2},n\geq 1\text{.} \end{equation*} \noindent By Assumption $(Ha),$ we have $\nu _{m(n)}/s_{n}\rightarrow 1$ as $n\rightarrow +\infty $ and, by Slutsky's theorem (see for example Proposition 15 in \cite{wcia-srv-ang}, page 60), the weak convergence, if it holds, is the same as that of \begin{equation*} T_{m(n)}=\frac{1}{\nu _{m(n)}}\sum_{j=1}^{m(n)}V_{j}. \end{equation*} \noindent Condition (\textit{Hc}) is the Lyapounov condition for this problem (see Lo\`{e}ve \cite{loeve}, page 287, Point B), where $\nu _{m(n)}$ is replaced by $s_{n}$.
\noindent This completes the proof.\newline \subsection{Proof of Theorem \ref{theo2}} Based on the remarks marked (\textbf{R1}) and \textbf{(R2)} in the body of the proof of Theorem \ref{theo1}, we conclude that if \textit{(L)}, \textit{(Ha)} and \textit{(Hab)} hold, the conclusion on the weak law of $S_{n}/s_{n}$ comes uniquely from Formula (\ref{lastStep}). At this step, the condition on the $(2+\delta )^{th}$ moments, that $\mathbb{E}\left\vert X_{j}\right\vert ^{2+\delta }<+\infty $, $j\geq 1,$ is not required. And Formula (\ref{lastStep}) expresses the weak convergence of the sums defined in (\ref{sumIndep}).\\ \noindent From there, the problem becomes the classical Lyapounov-Levy-Feller one, and we have the following conclusion: \noindent \textbf{(a)} $\max_{1\leq j\leq m(n)}\{\tau _{j}/\nu _{m(n)}\}\rightarrow 0$ as $n\rightarrow +\infty $ and \begin{equation*} \frac{1}{\nu _{m(n)}}\sum_{j=1}^{m(n)}V_{j}\rightsquigarrow \mathcal{N}(0,1)\text{ as }n\rightarrow +\infty , \end{equation*} \noindent if and only if \newline \noindent \textbf{(b)} for any $\varepsilon >0,$ \begin{equation*} g(\varepsilon )=\frac{1}{\nu _{m(n)}^{2}}\sum_{j=1}^{m(n)}\int_{(\left\vert x\right\vert \geq \varepsilon \nu _{m(n)})}x^{2}dF_{V_{j}}(x)\rightarrow 0\text{ as }n\rightarrow +\infty . \end{equation*} \noindent These two conditions are exactly those given in the statement of the theorem, where the replacement of $(\left\vert x\right\vert \geq \varepsilon \nu _{m(n)})$ by $(\left\vert x\right\vert \geq \varepsilon s_{n})$ in the expression of $g$ is possible because $\nu _{m(n)}/s_{n}\rightarrow 1$ as $n\rightarrow +\infty $.\newline \noindent This finishes the proof of this theorem.
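As a purely illustrative aside (our own toy example, not part of the proof), the Lindeberg-type quantity $g(\varepsilon)$ vanishes trivially for bounded independent blocks: for i.i.d. $V_j$ with $\mathbb{P}(V_j=\pm 1)=1/2$ we have $\tau_j^{2}=1$, $\nu_{m}^{2}=m$, and the truncated integral is nonzero only while $\varepsilon\sqrt{m}\leq 1$.

```python
import math

# Toy Lindeberg check for i.i.d. V_j with P(V=+1) = P(V=-1) = 1/2:
# tau_j^2 = 1 and nu_m^2 = m; the integral of x^2 over {|x| >= eps*nu_m}
# equals 1 for each block while eps*sqrt(m) <= 1, and 0 afterwards.
def lindeberg_g(m, eps):
    nu2 = float(m)
    mass_per_block = 1.0 if eps * math.sqrt(m) <= 1.0 else 0.0
    return (1.0 / nu2) * m * mass_per_block

print([lindeberg_g(m, 0.5) for m in (1, 4, 100, 10_000)])  # [1.0, 1.0, 0.0, 0.0]
```

Once $m$ exceeds $1/\varepsilon^{2}$, the truncation set misses the whole support and $g(\varepsilon)=0$, so condition (b) holds and the classical Lyapounov-Levy-Feller theorem yields the normal limit for such blocks.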
\subsection{Proof of Theorem \protect\ref{theo3}} The proof of Theorem \ref{theo3} is based on that of Theorem \ref{theo1} from Formula (\ref{conclusion01}).\\ \textbf{Acknowledgment} The second author acknowledges support from the World Bank Excellence Center (CEA-MITIC), which has continuously funded his research activities since 2014. The first author thanks the \textbf{Programme de formation des formateurs} of USTTB, which financed his stays in the LERSTAD of UGB while he was preparing his Ph.D. dissertation. Both authors acknowledge support from the \textbf{R\'{e}seau EDP - Mod\'{e}lisation et Contr\^{o}le} of Western African universities, which financed travel and accommodation of the second author while visiting USTTB in preparation of a series of works with his Ph.D. students there. \end{document}
\begin{document} \title{Universal hyperparallel hybrid photonic quantum gates with dipole-induced transparency in the weak-coupling regime\footnote{Published in Phys. Rev. A \textbf{91}, 032328 (2015)}} \author{Bao-Cang Ren, Guan-Yu Wang, and Fu-Guo Deng\footnote{Corresponding author: [email protected]} } \address{ Department of Physics, Applied Optics Beijing Area Major Laboratory, Beijing Normal University, Beijing 100875, China } \date{\today } \begin{abstract} We present the dipole-induced transparency (DIT) of a diamond nitrogen-vacancy center embedded in a photonic crystal cavity coupled to two waveguides; the DIT remains evident, through a robust and flexible difference in the reflectance and transmittance of circularly polarized light between the uncoupled and the coupled cavities, even in the bad-cavity regime (the Purcell regime). With this DIT, we propose two universal hyperparallel hybrid photonic quantum logic gates, including a hybrid hyper-controlled-not gate and a hybrid hyper-Toffoli gate, on photon systems in both the polarization and the spatial-mode degrees of freedom (DOFs), which are equivalent to two identical quantum logic gates operating simultaneously on photon systems in one DOF. They can be used to perform more quantum operations with fewer resources in quantum information protocols with multi-qubit systems in several DOFs, which may reduce both the resources consumed and the photonic dissipation. Moreover, they are more robust against asymmetric environment noise in the weak-coupling regime, compared with the integration of two cascaded quantum logic gates in one DOF. \end{abstract} \pacs{03.67.Lx, 42.50.Ex, 42.50.Pq, 78.67.Hc} \maketitle \section{Introduction} The quantum computer is powerful in quantum information processing because of its fascinating capability of parallel computing, according to quantum mechanics \cite{QC}. Quantum logic gates are the key elements used to precisely control and manipulate quantum states in quantum computation.
Many schemes have been proposed to implement quantum logic gates with various physical systems, both in theory and in experiment \cite{CQ}, such as the ion trap \cite{NTI}, nuclear magnetic resonance \cite{NMR}, quantum dots \cite{QD3}, superconducting qubits \cite{SCCQ}, and photon systems \cite{linear,nonlinear,mula}. In practice, several obstacles remain to be overcome in the implementation of universal quantum logic gates, especially the interaction between qubits. The optical nonlinearity of cavity quantum electrodynamics (QED) holds great promise for photon-photon, photon-dipole, and dipole-dipole interactions, and it has been used to complete some important tasks in quantum information processing, such as entanglement generation \cite{mula2,QD1,QD} and quantum logic gates \cite{QD4,QD5,mula}. Usually, the approaches for the light-dipole interaction in cavity QED focus on the strong-coupling regime \cite{mula,QD1,mula1}, which usually refers to the high-$Q$ regime, with the vacuum Rabi frequency of a dipole ($g$) exceeding both the cavity and the dipole decay rates. The strong coupling between a single atom and a photon has been demonstrated experimentally with cavity QED in the past few years \cite{SC,SC1}, and it has recently been used to implement a quantum logic gate between a single photon and a single trapped atom in experiment \cite{SC1}. In the bad-cavity regime, called the Purcell regime \cite{Pur}, where the cavity decay rate is much larger than the dipole decay rate, interesting nonlinear optical properties can also be observed with a much smaller coupling strength $g$. With the Purcell effect, the dipole-induced transparency (DIT) can be used for quantum information processing in the weak-coupling regime (low-$Q$ regime) \cite{QD,QD4,RT}. The fiber-optical switch \cite{FOS} and the quantum phase switch \cite{QPS} for photons have been demonstrated experimentally in the Purcell regime.
A nitrogen-vacancy (NV) center in diamond is a promising candidate for a solid-state matter qubit (a dipole emitter) in cavity QED due to its long electron-spin decoherence time, even at room temperature \cite{NV}. Approaches coupling an NV center in diamond to an optical cavity (or a nanomechanical resonator) have been investigated both in theory \cite{CNV} and in experiment \cite{CNV1,CNV3,CNV4,CNV2,CNV5}. The NV-center spins in diamond are very useful in quantum networks for algorithms and quantum memories \cite{NV7,NV8}. The quantum entanglement between the polarization of a single photon and the electron spin of an NV center in diamond has been produced in experiment \cite{NV2}, and the Faraday effect induced by the single spin of an NV center in diamond coupled to light has been observed in experiment as well \cite{NV3}; both facilitate applications in quantum information processing. In this paper, we show that the DIT of a double-sided cavity-NV-center system (an NV center in diamond embedded in a photonic crystal cavity coupled to two waveguides) can be used for the photon-photon interaction in both the polarization and spatial-mode degrees of freedom (DOFs). In the Purcell regime, the DIT is still obvious, with a robust and flexible difference in the reflectance and transmittance of circularly polarized light between the uncoupled and the coupled cavities. With this DIT, we construct a hybrid polarization-spatial hyper-controlled-not (CNOT) gate on a two-photon system, which is equivalent to two CNOT gates operating simultaneously on a four-photon system in one DOF. We also present a hybrid polarization-spatial hyper-Toffoli gate on a three-photon system, which is equivalent to two Toffoli gates on a six-photon system in one DOF.
These universal hyperparallel hybrid photonic quantum gates can reduce the resources consumed in quantum information processing, and they are more robust against photonic dissipation noise, compared with the integration of two cascaded quantum logic gates in one DOF. They have high fidelities in the symmetrical regime of double-sided cavity-NV-center systems, and they can suppress the asymmetric environment noise in the weak-coupling regime with a small Purcell factor. They can form universal hyperparallel photonic quantum computing assisted by single-photon rotations. Moreover, they are useful for quantum information protocols with multi-qubit systems in several DOFs, for example, the preparation of two-photon hyperentangled states and their complete analysis. \section{DIT for double-sided cavity-NV-center system}\label{sec2} A negatively charged NV center in diamond consists of a substitutional nitrogen atom, an adjacent vacancy, and six electrons coming from the nitrogen atom and the three carbon atoms surrounding the vacancy. Its ground state is an electron-spin triplet with a splitting of $2.88$ GHz between the magnetic sublevels $|0\rangle$ ($|m_s=0\rangle$) and $|\pm1\rangle$ ($|m_s=\pm1\rangle$). There are six electronic excited states according to the Hamiltonian with the spin-orbit and spin-spin interactions and $C_{3v}$ symmetry \cite{NV4}. Optical transitions between the ground states and the excited states are spin preserving, while the electronic orbital angular momentum is changed by the photon polarization. The excited state $|A_2\rangle$, which is robust owing to its stable symmetry properties, decays with equal probability to the ground states $|-1\rangle$ and $|+1\rangle$ through the $\sigma^+$ and $\sigma^-$ polarization radiations, respectively \cite{NV2} (see Fig.\ref{figure1}(b)).
The excited state $|A_2\rangle$ has the form $|A_2\rangle=(|E_-\rangle|+1\rangle+|E_+\rangle|-1\rangle)/\sqrt{2}$ \cite{NV2}, where $|E_\pm\rangle$ are the orbit states with the angular momentum projections $\pm1$ along the NV axis (the $z$ axis in Fig.\ref{figure1}). The ground states are associated with the orbit state $|E_0\rangle$ with the angular momentum projection zero along the NV axis. \begin{figure} \caption{(Color online) The optical transitions of an NV center with circularly polarized lights. (a) A double-sided cavity-waveguide-NV-center system. (b) The optical transitions of an NV center. The photon in the state $|R^\uparrow\rangle$ or $|L^\downarrow\rangle$ corresponds to $\sigma^+$, and the photon in the state $|R^\downarrow\rangle$ or $|L^\uparrow\rangle$ corresponds to $\sigma^-$. $R^\uparrow$ ($R^\downarrow$) and $L^\uparrow$ ($L^\downarrow$) represent the right- and left- circularly polarized lights with their input (output) directions parallel (antiparallel) to the $z$ direction.} \label{figure1} \end{figure} The DIT of the cavity-NV-center system (shown in Fig.\ref{figure1}(a)) can be calculated by the Heisenberg equations of motion for the cavity field operator $\hat{a}$ and the dipole operator $\hat{\sigma}_-$ \cite{QD,QD11}, that is, \begin{eqnarray} \begin{split} \frac{d\hat{a}}{dt}\!=&\!-\left[i(\omega_c-\omega)+\eta+\frac{\kappa}{2}\right]\hat{a} -\sqrt{\eta}(\hat{a}_{in}+\hat{a}'_{in}) \\%\nonumber\\ &-g\hat{\sigma}_--\hat{h}, \\%\nonumber\\ \frac{d\hat{\sigma}_-}{dt}\!=&\!-\left[i(\omega_k-\omega)+\frac{\gamma}{2}\right]\hat{\sigma}_--g\hat{\sigma}_z\hat{a}-\hat{f}. \end{split} \end{eqnarray} Here, $\omega_k$ ($k=-1,+1$), $\omega$, and $\omega_c$ are the frequencies of the transition between $|-1\rangle$ ($|+1\rangle$) and $|A_2\rangle$, the waveguide channel mode, and the cavity mode, respectively. $g$ is the coupling strength of the cavity to the NV center. $\gamma/2$ is the decay rate of the emitter. 
$\eta$ and $\kappa/2$ are the decay rates of the cavity field into the waveguide channel modes and the cavity intrinsic loss modes, respectively. $\hat{h}$ and $\hat{f}$ are noise operators, which preserve the commutation relations. The operators $\hat{a}_{in}$ ($\hat{a}'_{in}$) and $\hat{a}_{out}$ ($\hat{a}'_{out}$) are the input and output field operators, respectively. They satisfy the boundary relations $\hat{a}_{out}=\hat{a}_{in}+\sqrt{\eta}\,\hat{a}$ and $\hat{a}'_{out}=\hat{a}'_{in}+\sqrt{\eta}\,\hat{a}$. The decay rates of the cavity field into the two waveguides can be set very close to each other to get approximately the same fidelity for both directions ($\eta_1\cong\eta_2=\eta$) \cite{SC}. In the weak excitation limit with the emitter predominantly in the ground state ($\langle\sigma_z\rangle=-1$), the transmission and reflection coefficients of the cavity-NV-center system are given by \begin{eqnarray} \begin{split} t(\omega)&=\frac{-\eta[i(\omega_k-\omega)+\frac{\gamma}{2}]}{[i(\omega_k-\omega) +\frac{\gamma}{2}][i(\omega_c-\omega)+\eta+\frac{\kappa}{2}]+g^2},\\ r(\omega)&=1+t(\omega). \end{split} \end{eqnarray} \begin{figure} \caption{(Color online) The reflection and transmission coefficients of the double-sided cavity-NV-center system vs the normalized frequency detuning $(\omega-\omega_0)/\eta$ ($\omega_c=\omega_k=\omega_0$). (a) $g=0$, $\gamma\sim2\pi\times80$ MHz \cite{CNV2} and $\eta=50\kappa\sim2\pi\times0.05$ THz ($Q\sim10^4$). (b) $g\sim2\pi\times0.06$ THz, $\gamma\sim2\pi\times80$ MHz, and $\eta=50\kappa\sim2\pi\times0.05$ THz.
(c) $g\sim2\pi\times0.035$ THz, $\gamma\sim2\pi\times80$ MHz, and $\eta=50\kappa\sim2\pi\times0.5$ THz ($Q\sim10^3$).} \label{figure2} \end{figure} When the emitter is resonant with the cavity mode ($\omega_c=\omega_k=\omega$), the transmission and reflection coefficients are $t=-(2F_p+1+\frac{\lambda}{2})^{-1}$ and $r=(2F_p+\frac{\lambda}{2})/(2F_p+1+\frac{\lambda}{2})$ for $g>0$, and they are $t_0=-(1+\frac{\lambda}{2})^{-1}$ and $r_0=\frac{\lambda}{2}/(1+\frac{\lambda}{2})$ for $g=0$. Here $F_p=g^2/(\eta\gamma)$ is the Purcell factor ($\kappa\approx0$), and $\lambda=\kappa/\eta$. If the Purcell factor satisfies $F_p\gg1$, the reflection and transmission coefficients become $r(\omega)\rightarrow1$ and $t(\omega)\rightarrow0$. If the cavity decay ratio satisfies $\lambda\ll1$, the reflection and transmission coefficients of the bare cavity become $r_0(\omega)\rightarrow0$ and $t_0(\omega)\rightarrow-1$ (Fig.\ref{figure2}(a)). The interaction between a single photon and the emitter in an NV center is then obtained as \begin{eqnarray} \begin{split} |\sigma^+\rangle(|-1\rangle+|+1\rangle)\;\;\rightarrow\;\;& |\sigma^+_r\rangle|-1\rangle-|\sigma^+_{t_0}\rangle|+1\rangle,\\ |\sigma^-\rangle(|-1\rangle+|+1\rangle)\;\;\rightarrow\;\;&-|\sigma^-_{t_0}\rangle|-1\rangle+|\sigma^-_r\rangle|+1\rangle. \label{eq3} \end{split} \end{eqnarray} Here the subscript $r$ ($t_0$) indicates that the photon is reflected (transmitted). In the strong-coupling (high-$Q$) regime, the dipole-induced reflection is the result of vacuum Rabi splitting with the Rabi frequency $\Omega=2g$, and the transmission (reflection) dip is equal to $2g$ (Fig.\ref{figure2}(b)). The incoming pulse must be longer than the Rabi oscillation period $1/g$ in this high-$Q$ regime \cite{QD}.
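As a numerical sanity check, the coefficients above can be evaluated directly. In the sketch below the parameter values are the order-of-magnitude ones quoted for Fig.\ref{figure2} (expressed in units of $2\pi\times$GHz), chosen only for illustration; on resonance the result reproduces the closed forms $t=-(2F_p+1+\frac{\lambda}{2})^{-1}$ and $t_0=-(1+\frac{\lambda}{2})^{-1}$.

```python
# Numerical sketch of t(omega) and r(omega) = 1 + t(omega) for the
# double-sided cavity-NV-center system.  Parameter values are the
# order-of-magnitude ones quoted for Fig. 2 (units of 2*pi x GHz) and
# are assumptions made only for this illustration.

def t_coeff(omega, omega_c, omega_k, g, gamma, eta, kappa):
    num = -eta * (1j * (omega_k - omega) + gamma / 2)
    den = ((1j * (omega_k - omega) + gamma / 2)
           * (1j * (omega_c - omega) + eta + kappa / 2) + g ** 2)
    return num / den

def r_coeff(omega, omega_c, omega_k, g, gamma, eta, kappa):
    return 1 + t_coeff(omega, omega_c, omega_k, g, gamma, eta, kappa)

gamma, eta, kappa = 0.08, 50.0, 1.0   # gamma ~ 2pi x 80 MHz, eta = 50 kappa
g = 35.0                              # weak-coupling value of Fig. 2(c)

# On resonance the coupled cavity reflects (|t| << 1, r -> 1) while the
# bare cavity (g = 0) transmits (|t0| -> 1): the DIT signature.
t_coupled = t_coeff(0.0, 0.0, 0.0, g, gamma, eta, kappa)
t_bare = t_coeff(0.0, 0.0, 0.0, 0.0, gamma, eta, kappa)
```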
In the weak-coupling (low-$Q$) regime, the dipole-induced reflection is caused by the destructive interference of the cavity field and the dipole emission field, and the transmission (reflection) dip is equal to $2\Gamma=2F_p\gamma/(1+\frac{\lambda}{2})$ (Fig.\ref{figure2}(c)). The incoming pulse must be longer than the Rabi oscillation period $1/\Gamma$ in this bad-cavity regime \cite{QD}. In the weak excitation approximation, the time interval between two photons should be longer than $\Delta\tau=2F_p/[\gamma(1+\frac{\lambda}{2})]$. The transmission and reflection rule in Eq.(\ref{eq3}) can be described in the circular basis $\{|R\rangle, |L\rangle\}$ shown in Fig.\ref{figure1}(b). The circular polarization of a photon is usually defined relative to its propagation direction, and the handedness of circularly polarized light is changed upon reflection. That is, \begin{eqnarray} \begin{split} &|R^\uparrow,-1\rangle \;\rightarrow\; |L^\downarrow,-1\rangle,\;\;\;\;\;\; |R^\uparrow,+1\rangle \;\rightarrow\; -|R^\uparrow,+1\rangle,\\ &|L^\downarrow,-1\rangle \;\rightarrow\; |R^\uparrow,-1\rangle,\;\;\;\;\;\; |L^\downarrow,+1\rangle \;\rightarrow\; -|L^\downarrow,+1\rangle,\;\;\;\;\; \\ &|R^\downarrow,-1\rangle \;\rightarrow\; -|R^\downarrow,-1\rangle,\;\;\; |R^\downarrow,+1\rangle \;\rightarrow\; |L^\uparrow,+1\rangle,\\ &|L^\uparrow,-1\rangle \;\rightarrow\; -|L^\uparrow,-1\rangle,\;\;\;\; |L^\uparrow,+1\rangle \;\rightarrow\; |R^\downarrow,+1\rangle.\label{eq4} \end{split} \end{eqnarray} Here, on the left-hand side of "$\rightarrow$" in Eq.(\ref{eq4}), $|R^\uparrow\rangle$ ($|L^\uparrow\rangle$) indicates that the photon $R$ ($L$) is put into the cavity-NV-center system through the down spatial mode of the system, and $|R^\downarrow\rangle$ ($|L^\downarrow\rangle$) indicates that the photon $R$ ($L$) is put into the system through the upper spatial mode.
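The rule of Eq.(\ref{eq4}) can be kept track of as a small lookup table on signed basis states. The sketch below is only a bookkeeping aid (with `^` and `v` standing for the up and down arrows of Eq.(\ref{eq4})); feeding an output state back in restores every basis state with a net sign of $+1$, a quick formal consistency check of the table.

```python
# Bookkeeping sketch of the transformation rule of Eq. (4): each entry
# maps (polarization-with-direction, spin) to (sign, new polarization,
# spin).  "^" stands for the up arrow and "v" for the down arrow.

RULE = {
    ("R^", "-1"): (+1, "Lv", "-1"),  ("R^", "+1"): (-1, "R^", "+1"),
    ("Lv", "-1"): (+1, "R^", "-1"),  ("Lv", "+1"): (-1, "Lv", "+1"),
    ("Rv", "-1"): (-1, "Rv", "-1"),  ("Rv", "+1"): (+1, "L^", "+1"),
    ("L^", "-1"): (-1, "L^", "-1"),  ("L^", "+1"): (+1, "Rv", "+1"),
}

def apply_rule(sign, pol, spin):
    """Apply Eq. (4) once to a signed basis state."""
    s, new_pol, new_spin = RULE[(pol, spin)]
    return sign * s, new_pol, new_spin

# Formal consistency check: applying the rule twice restores every
# basis state with an overall sign of +1.
def apply_twice(pol, spin):
    return apply_rule(*apply_rule(+1, pol, spin))
```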
On the right-hand side of "$\rightarrow$" in Eq.(\ref{eq4}), $|R^\uparrow\rangle$ ($|L^\uparrow\rangle$) indicates that the photon $R$ ($L$) exits from the cavity-NV-center system through the upper spatial mode, and $|R^\downarrow\rangle$ ($|L^\downarrow\rangle$) indicates that the photon $R$ ($L$) exits from the system through the down spatial mode. $i_1$ and $i_2$ represent the two spatial modes of photon $i$ ($i=a,b$) as shown in Fig.\ref{figure3}. \begin{figure} \caption{(Color online) Schematic diagram for a hybrid photonic hyper-CNOT gate operating on a two-photon system in both the spatial-mode and polarization DOFs. $X$ represents a half-wave plate which is used to perform a polarization bit-flip operation $X=|R\rangle\langle L|+|L\rangle\langle R|$. $Z_n$ ($n=1,2$) represents a half-wave plate which is used to perform a polarization phase-flip operation $Z=|R\rangle\langle R|-|L\rangle\langle L|$. $U$ represents a wave plate which is used to perform a polarization phase-flip operation $U=-|R\rangle\langle R|-|L\rangle\langle L|$. CPBS$_m$ ($m=1,2,3$) represents a polarizing beam splitter in the circular basis, which transmits the photon in right-circular polarization $\vert R\rangle$ and reflects the photon in left-circular polarization $\vert L\rangle$. $i_{k1}$ and $i_{k2}$ represent the two spatial modes of photon $i$ ($i=a,b$), respectively. NV$_k$ ($k=1,2$) represents a double-sided cavity-NV-center system. An optical switch is used at the merging point of $i_{kl}$ and $j_{kl}$.
} \label{figure3} \end{figure} \section{Hybrid photonic hyper-CNOT gate on a two-photon system}\label{sec3} Here, a hybrid photonic hyper-CNOT gate on a two-photon system in both the polarization and spatial-mode DOFs is used to complete the task that a bit-flip operation is performed on the spatial mode of photon $b$ (the target qubit) when the polarization of photon $a$ (the control qubit) is in the state $\vert L\rangle$, and simultaneously a bit-flip operation takes place on the spatial mode of photon $a$ when the polarization of photon $b$ is in the state $\vert L\rangle$. It can act as two cascaded hybrid CNOT gates on a four-photon system in one DOF with less operation time and less resources, which is far different from the hybrid CNOT gate based on one DOF of photon systems \cite{oneDOF}. The principle of our hybrid photonic hyper-CNOT gate is shown in Fig.\ref{figure3}, where two identical quantum circuits are required. We describe it in detail as follows. Suppose that the initial states of the two NV centers are $|+\rangle_{e_1}$ and $|+\rangle_{e_2}$, respectively, and the initial states of the two photons $a$ and $b$ are \begin{eqnarray} \begin{split} |\psi_a\rangle_0&\;=\;(\alpha_1|R\rangle+\alpha_2|L\rangle)_a(\gamma_1|a_1\rangle+\gamma_2|a_2\rangle),\\ |\psi_b\rangle_0&\;=\;(\beta_1|R\rangle+\beta_2|L\rangle)_b(\delta_1|b_1\rangle+\delta_2|b_2\rangle). \end{split} \end{eqnarray} Here $|\pm\rangle=\frac{1}{\sqrt{2}}(|-1\rangle\pm|+1\rangle)$. First, we perform the Hadamard operations on the polarization DOF of both photons $a$ and $b$, and the states of the two photons $a$ and $b$ become $|\psi'_a\rangle_0=(\alpha'_1|R\rangle+\alpha'_2|L\rangle)_a(\gamma_1|a_1\rangle+\gamma_2|a_2\rangle)$ and $ |\psi'_b\rangle_0=(\beta'_1|R\rangle+\beta'_2|L\rangle)_b(\delta_1|b_1\rangle+\delta_2|b_2\rangle)$. 
Here, $\alpha'_1=\frac{1}{\sqrt{2}}(\alpha_1+\alpha_2)$, $\alpha'_2=\frac{1}{\sqrt{2}}(\alpha_1-\alpha_2)$, $\beta'_1=\frac{1}{\sqrt{2}}(\beta_1+\beta_2)$, and $\beta'_2=\frac{1}{\sqrt{2}}(\beta_1-\beta_2)$. The Hadamard operation on the polarization DOF of a photon is used to implement the unitary single-qubit operation $\vert R\rangle \rightarrow \frac{1}{\sqrt{2}}(\vert R\rangle + \vert L\rangle)$ and $\vert L\rangle \rightarrow \frac{1}{\sqrt{2}}(\vert R\rangle - \vert L\rangle)$. Subsequently, we lead the two wavepackets of photon $a$ ($b$) from the two spatial modes $|a_1\rangle$ ($|b_1\rangle$) and $|a_2\rangle$ ($|b_2\rangle$) to spatial ports $i_{11}$ ($i_{21}$) and $i_{12}$ ($i_{22}$) of the cavity-NV-center system NV$_1$ (NV$_2$) as shown in Fig.\ref{figure3}. After photon $a$ ($b$) passes through CPBS$_1$, NV$_1$ (NV$_2$), CPBS$_2$, and $U$, the state of the quantum system composed of photon $a$ ($b$) and NV$_1$ (NV$_2$) is transformed from $|\Psi'_{ae_1}\rangle_0\equiv|\psi'_a\rangle_0\otimes|+\rangle_{e_1}$ ($|\Psi'_{be_2}\rangle_0\equiv|\psi'_b\rangle_0\otimes|+\rangle_{e_2}$) to $|\Psi_{ae_1}\rangle_1$ ($|\Psi_{be_2}\rangle_1$). Here \begin{eqnarray} \begin{split} |\Psi_{ae_1}\rangle_1\;=\;&\frac{1}{\sqrt{2}}\{\gamma_1[|-1\rangle_{e_1}(\alpha'_1|R\rangle+\alpha'_2|L\rangle)_a \\ &-|+1\rangle_{e_1}(\alpha'_2|R\rangle+\alpha'_1|L\rangle)_a]|a_1\rangle \\ &+\gamma_2[|-1\rangle_{e_1}(\alpha'_2|R\rangle+\alpha'_1|L\rangle)_a \\ &-|+1\rangle_{e_1}(\alpha'_1|R\rangle+\alpha'_2|L\rangle)_a]|a_2\rangle\},\\ |\Psi_{be_2}\rangle_1\;=\;&\frac{1}{\sqrt{2}}\{\delta_1[|-1\rangle_{e_2}(\beta'_1|R\rangle+\beta'_2|L\rangle)_b \\ &-|+1\rangle_{e_2}(\beta'_2|R\rangle+\beta'_1|L\rangle)_b]|b_1\rangle \\ &+\delta_2[|-1\rangle_{e_2}(\beta'_2|R\rangle+\beta'_1|L\rangle)_b \\ &-|+1\rangle_{e_2}(\beta'_1|R\rangle+\beta'_2|L\rangle)_b]|b_2\rangle\}. 
\end{split} \end{eqnarray} Second, after a Hadamard operation is performed on NV$_1$ (NV$_2$), we let photon $a$ ($b$) pass through two spatial paths $j_{21}$ ($j_{11}$) and $j_{22}$ ($j_{12}$) of the cavity-NV-center system NV$_2$ (NV$_1$) shown in Fig.\ref{figure3} (with optical switches). Here a Hadamard operation on an NV center is used to complete the transformations $\vert -1\rangle \rightarrow \vert +\rangle$ and $\vert +1\rangle \rightarrow \vert -\rangle$. After photon $a$ ($b$) passes through NV$_2$ (NV$_1$), $X$, CPBS$_3$, $Z_1$, and $Z_2$, the state of the quantum system composed of photons $a$ and $b$, NV$_1$, and NV$_2$ is changed from $|\Psi_{abe_1e_2}\rangle_1\equiv|\Psi_{ae_1}\rangle_1\otimes|\Psi_{be_2}\rangle_1$ to \begin{eqnarray} |\Psi_{abe_1e_2}\rangle_2\!\!&=&\!\!\frac{1}{2}[|-1\rangle_{e_1}\alpha_2(|L\rangle-|R\rangle)_a(\delta_2|b_1\rangle+\delta_1|b_2\rangle)\nonumber\\ &&-|+1\rangle_{e_1}\alpha_1(|R\rangle+|L\rangle)_a(\delta_1|b_1\rangle+\delta_2|b_2\rangle)]\nonumber\\ &&\otimes[|-1\rangle_{e_2}\beta_2(|L\rangle-|R\rangle)_b(\gamma_2|a_1\rangle+\gamma_1|a_2\rangle)\nonumber\\ &&-|+1\rangle_{e_2}\beta_1(|R\rangle+|L\rangle)_b(\gamma_1|a_1\rangle+\gamma_2|a_2\rangle)].\nonumber\\ \end{eqnarray} Finally, with the Hadamard operations performed on NV$_1$, NV$_2$, and the polarization DOF of photons $a$ and $b$ again, the outcome of a hybrid photonic hyper-CNOT gate can be obtained by measuring the two NV centers in the orthogonal basis $\{|-1\rangle,|+1\rangle\}$ and performing conditional phase shift operations on the polarization modes of photons $a$ and $b$. After we perform an additional sign change $|L\rangle_a\rightarrow-|L\rangle_a$ on photon $a$ when NV$_1$ is in the state $|+1\rangle_{e_1}$ and an additional sign change $|L\rangle_b\rightarrow-|L\rangle_b$ on photon $b$ when NV$_2$ is in the state $|+1\rangle_{e_2}$, the state of the two-photon system $ab$ becomes \begin{eqnarray} \label{eq.7} |\psi_{ab}\rangle \!\!&=&\!\!
[\alpha_1|R\rangle_a(\delta_1|b_1\rangle \!+\! \delta_2|b_2\rangle) \!+\! \alpha_2|L\rangle_a(\delta_2|b_1\rangle\!+\!\delta_1|b_2\rangle) ]\nonumber\\ &&\!\!\otimes [\beta_1|R\rangle_b(\gamma_1|a_1\!\rangle\!+\!\gamma_2|a_2\!\rangle)\!+\!\beta_2|L\!\rangle_b(\gamma_2|a_1\!\rangle\!+\!\gamma_1|a_2\!\rangle)].\nonumber\\ \end{eqnarray} It is the result of a hybrid photonic hyper-CNOT gate operating on a two-photon system, using the polarization of each photon as a control qubit and the spatial mode of the other photon as the target qubit. \begin{figure*} \caption{(Color online) Schematic diagram for the first step of our hybrid photonic hyper-Toffoli gate operating on both the spatial-mode and polarization DOFs of a three-photon system.} \label{figure4} \end{figure*} \section{Hybrid photonic hyper-Toffoli gate on a three-photon system}\label{sec4} A Toffoli gate is used to complete a bit-flip operation on the state of the target qubit when both control qubits are in the state $\vert 1\rangle$; otherwise, nothing is done on the target qubit \cite{QC}. It is a universal quantum gate for quantum computing. Here, the hybrid hyper-Toffoli gate, operating on a three-photon system $abc$ in both the polarization and spatial-mode DOFs, is used to achieve the task that a bit-flip operation is performed on the spatial mode of photon $c$ (the target qubit) when the polarizations of both photons $a$ and $b$ (the control qubits) are $\vert L\rangle$, and simultaneously a bit-flip operation takes place on the spatial mode of photon $b$ (the target qubit) when the spatial mode of photon $a$ is $\vert a_2\rangle$ and the polarization of photon $c$ is $\vert L\rangle$ (the control qubits). The two parts of the quantum circuit for our hybrid photonic hyper-Toffoli gate are shown in Fig.\ref{figure4} and Fig.\ref{figure5}, respectively.
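Before detailing the Toffoli construction, the net action of the hyper-CNOT gate of Eq.(\ref{eq.7}) can be checked on the computational basis. The bit encoding below (0 for $|R\rangle$ or the first spatial mode, 1 for $|L\rangle$ or the second) is an assumption made only for this sketch.

```python
# Sketch of the net action of the hyper-CNOT gate of Eq. (7) on the
# computational basis.  Encoding (an assumption for this sketch):
# polarization bit 0 = |R>, 1 = |L>; spatial bit 0 = first mode,
# 1 = second mode.

def cnot(control, target):
    """Standard CNOT truth table on a (control, target) bit pair."""
    return control, target ^ control

def hyper_cnot(pa, sa, pb, sb):
    """Polarization of each photon controls the spatial mode of the other."""
    _, sb = cnot(pa, sb)   # polarization of photon a flips spatial mode of b
    _, sa = cnot(pb, sa)   # polarization of photon b flips spatial mode of a
    return pa, sa, pb, sb

# Two CNOTs acting in parallel on disjoint control/target pairs: the
# hyper-CNOT is therefore its own inverse.
```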
Suppose that the initial states of the two NV centers are $|+\rangle_{e_1}$ and $|+\rangle_{e_2}$, respectively, and the initial states of three photons $a$, $b$, and $c$ are $|\phi_a\rangle_0=(\alpha_1|R\rangle+\beta_1|L\rangle)_a(\gamma_1|a_1\rangle+\delta_1|a_2\rangle)$, $|\phi_b\rangle_0=(\alpha_2|R\rangle+\beta_2|L\rangle)_b(\gamma_2|b_1\rangle+\delta_2|b_2\rangle)$, and $|\phi_c\rangle_0=(\alpha_3|R\rangle+\beta_3|L\rangle)_c(\gamma_3|c_1\rangle+\delta_3|c_2\rangle)$, respectively. This hyper-Toffoli gate can be constructed with two steps described in detail below. The principle of the first step for our hybrid photonic hyper-Toffoli gate is shown in Fig.\ref{figure4}. First, the wave packets of photon $a$ from the two spatial modes $|a_1\rangle$ and $|a_2\rangle$ are led to CPBS, $X$, $H_P$, CPBS, NV$_1$, CPBS, $U$, $H_P$, $X$, and CPBS in sequence as shown in Fig.\ref{figure4}, and the state of the quantum system composed of photon $a$ and NV$_1$ is transformed from $|\Phi_{ae_1}\rangle_0\equiv |\phi_a\rangle_0\otimes|+\rangle_{e_1}$ to \begin{eqnarray} |\Phi_{ae_1}\rangle_1\!\!&=&\!\!(\alpha_1|R\rangle_a|+\rangle_{e_1}+\beta_1|L\rangle_a|-\rangle_{e_1})(\gamma_1|a_1\rangle+\delta_1|a_2\rangle).\nonumber\\ \end{eqnarray} Subsequently, we lead the wave packet of photon $a$ from the spatial mode $|a_2\rangle$ to BS, $X$, NV$_2$, $X$, CPBS, $Z$, and BS in sequence as shown in Fig.\ref{figure4}, and the state of the quantum system composed of photon $a$, NV$_1$, and NV$_2$ is transformed from $|\Phi_{ae_1e_2}\rangle_1\equiv|\Phi_{ae_1}\rangle_1\otimes|+\rangle_{e_2}$ to \begin{eqnarray} \begin{split} |\Phi_{ae_1e_2}\rangle_2\;=\;&(\alpha_1|R\rangle_a|+\rangle_{e_1}+\beta_1|L\rangle_a|-\rangle_{e_1}) \\ &\otimes(\gamma_1|a_1\rangle|+\rangle_{e_2}+\delta_1|a_2\rangle|-\rangle_{e_2}). 
\end{split} \end{eqnarray} In the second step, after a Hadamard operation is performed on each of NV$_1$ and NV$_2$, the two wave packets of photon $b$ from the two spatial modes $|b_1\rangle$ and $|b_2\rangle$ are led to CPBSs, NV$_1$, CPBS, $X$, and $U$ in sequence as shown in Fig.\ref{figure5}, and the state of the quantum system composed of NV$_1$, NV$_2$, and photons $a$ and $b$ is transformed from $|\Phi_{abe_1e_2}\rangle_2=|\Phi_{ae_1e_2}\rangle_2\otimes|\phi_b\rangle_0$ to \begin{eqnarray} |\Phi_{abe_1e_2}\rangle_3\!\!&=&\!\!(\alpha_1\beta_2|-1\rangle_{e_1}|RL\rangle_{ab}+\beta_1\beta_2|+1\rangle_{e_1}|LL\rangle_{ab}\nonumber\\ &&\!\!-\alpha_1\alpha_2|-1\rangle_{e_1}|RR\rangle_{ab}+\beta_1\alpha_2|+1\rangle_{e_1}|LR\rangle_{ab})\nonumber\\ &&\!\!\otimes(\gamma_1|a_1\rangle|-1\rangle_{e_2}+\delta_1|a_2\rangle|+1\rangle_{e_2})\nonumber\\ &&\!\!\otimes(\gamma_2|b_1\rangle+\delta_2|b_2\rangle). \end{eqnarray} After a Hadamard operation is performed on NV$_1$, the two wave packets of photon $b$ from the two spatial modes $|b_1\rangle$ and $|b_2\rangle$ are led to $H_P$, CPBS, NV$_1$, CPBS, $X$, and $U$ in sequence with an optical switch $S$ (through the dotted line in Fig.\ref{figure5}), and then the state of the quantum system composed of NV$_1$, NV$_2$, and photons $a$ and $b$ becomes \begin{eqnarray} |\Phi_{abe_1e_2}\rangle_4\!\!&=&\!\![\alpha_1\beta_2|+\rangle_{e_1}|RL\rangle_{ab}+\beta_1\beta_2|-\rangle_{e_1}|LL\rangle_{ab}\nonumber\\ &&-\frac{\alpha_1\alpha_2}{\sqrt{2}}|+\rangle_{e_1}|R(R-L)\rangle_{ab}\nonumber\\ &&-\frac{\beta_1\alpha_2}{\sqrt{2}}|+\rangle_{e_1}|L(R+L)\rangle_{ab}]\nonumber\\ &&\otimes(\gamma_1|a_1\rangle|-1\rangle_{e_2}+\delta_1|a_2\rangle|+1\rangle_{e_2})\nonumber\\ &&\otimes(\gamma_2|b_1\rangle+\delta_2|b_2\rangle).
\end{eqnarray} Next, after another Hadamard operation is performed on NV$_1$, we lead the wave packets of photon $c$ from the two spatial modes $|c_1\rangle$ and $|c_2\rangle$ to $X$, $U$, NV$_1$, $U$, $X$, and CPBS in sequence as shown in Fig.\ref{figure5}. The quantum system composed of NV$_1$, NV$_2$, and photons $a$, $b$, and $c$ is evolved from $|\Phi_{abce_1e_2}\rangle_4\equiv |\Phi_{abe_1e_2}\rangle_4\otimes|\phi_c\rangle_0$ to \begin{eqnarray} |\Phi_{abce_1e_2}\rangle_5 \!\!&=&\!\! [\alpha_1\beta_2|-1\rangle_{e_1}|RL\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+\beta_1\beta_2|+1\rangle_{e_1}|LL\rangle_{ab}(\gamma_3|c_1\rangle+\delta_3|c_2\rangle)\nonumber\\ &&\!\!-\frac{\alpha_1\alpha_2}{\sqrt{2}}|-1\rangle_{e_1}|R(R-L)\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!-\frac{\beta_1\alpha_2}{\sqrt{2}}|-1\rangle_{e_1}|L(R+L)\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)]\nonumber\\ &&\!\!\otimes(\gamma_2|b_1\rangle+\delta_2|b_2\rangle)(\gamma_1|a_1\rangle|-1\rangle_{e_2}\nonumber\\ &&\!\!+\delta_1|a_2\rangle|+1\rangle_{e_2})(\alpha_3|R\rangle+\beta_3|L\rangle)_c. \end{eqnarray} Subsequently, the wave packets of photon $b$ from the two spatial modes $|b_1\rangle$ and $|b_2\rangle$ are led to CPBS, NV$_1$, CPBS, $X$, and $U$ (through the dash-dot-dotted line in Fig.\ref{figure5}) after a Hadamard operation is performed on NV$_1$, and then the wavepackets of photon $c$ from the two spatial modes $|c_1\rangle$ and $|c_2\rangle$ are led to CPBSs, NV$_2$, CPBS, $X$, and $U$ in sequence as shown in Fig.\ref{figure5}. 
The state of the system $abce_1e_2$ is transformed from $|\Phi_{abce_1e_2}\rangle_5$ to \begin{eqnarray} |\Phi_{abce_1e_2}\rangle_6\!\!&=&\!\![\alpha_1\beta_2|+\rangle_{e_1}|RL\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+\beta_1\beta_2|-\rangle_{e_1}|LL\rangle_{ab}(\gamma_3|c_1\rangle+\delta_3|c_2\rangle)\nonumber\\ &&\!\!-\frac{\alpha_1\alpha_2}{\sqrt{2}}|+\rangle_{e_1}|R(R-L)\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+\frac{\beta_1\alpha_2}{\sqrt{2}}|-\rangle_{e_1}|L(R+L)\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)]\nonumber\\ &&\!\!\otimes(\gamma_2|b_1\rangle+\delta_2|b_2\rangle)(|\!-\!1\rangle_{e_2}\gamma_1\beta_3|a_1L\rangle_{ac}\nonumber\\ &&\!\!+|\!+\!1\rangle_{e_2}\delta_1\beta_3|a_2L\rangle_{ac}-|\!-\!1\rangle_{e_2}\gamma_1\alpha_3|a_1L\rangle_{ac}\nonumber\\ &&\!\!+|\!+\!1\rangle_{e_2}\delta_1\alpha_3|a_2R\rangle_{ac}). \end{eqnarray} \begin{figure*} \caption{(Color online) Schematic diagram for the second step of our hybrid photonic hyper-Toffoli gate operating on both the spatial-mode and polarization DOFs of a three-photon system. $S$ represents an optical switch.} \label{figure5} \end{figure*} After the Hadamard operations are performed on NV$_1$ and NV$_2$, we lead the wave packets of photon $b$ from the two spatial modes $|b_1\rangle$ and $|b_2\rangle$ to $H_P$, CPBS, NV$_1$, CPBS, $X$, and $U$ again (through the dotted line in Fig.\ref{figure5}). 
We also lead the wave packets of photon $c$ from the two spatial modes $|c_1\rangle$ and $|c_2\rangle$ to $H_P$, CPBS, NV$_2$, CPBS, $X$, and $U$ (through the dotted line in Fig.\ref{figure5}), and then the state of the quantum system becomes \begin{eqnarray} |\Phi_{abce_1e_2}\rangle_7\!\!&=&\!\![\alpha_1\beta_2|-1\rangle_{e_1}|RL\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+\beta_1\beta_2|+1\rangle_{e_1}|LL\rangle_{ab}(\gamma_3|c_1\rangle+\delta_3|c_2\rangle)\nonumber\\ &&\!\!+\alpha_1\alpha_2|-1\rangle_{e_1}|RR\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+\beta_1\alpha_2|+1\rangle_{e_1}|LR\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)]\nonumber\\ &&\!\!\otimes[\gamma_1\beta_3|+\rangle_{e_2}|a_1L\rangle_{ac}+\delta_1\beta_3|-\rangle_{e_2}|a_2L\rangle_{ac}\nonumber\\ &&\!\!-\frac{\gamma_1\alpha_3}{\sqrt{2}}|+\rangle_{e_2}|a_1(R-L)\rangle_{ac}\nonumber\\ &&\!\!-\frac{\delta_1\alpha_3}{\sqrt{2}}|+\rangle_{e_2}|a_2(R+L)\rangle_{ac}]\nonumber\\ &&\!\!\otimes(\gamma_2|b_1\rangle+\delta_2|b_2\rangle). \end{eqnarray} After a Hadamard operation is performed on NV$_2$, we lead the wave packets of photon $b$ from the two spatial modes $|b_1\rangle$ and $|b_2\rangle$ to CPBS, $X$, $U$, NV$_2$, $U$, $X$, and CPBS in sequence as shown in Fig.\ref{figure5}.
The state of the quantum system becomes \begin{eqnarray} |\Phi_{abce_1e_2}\rangle_8\!\!&=&\!\![\alpha_1\beta_2|-1\rangle_{e_1}|RL\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+\beta_1\beta_2|+1\rangle_{e_1}|LL\rangle_{ab}(\gamma_3|c_1\rangle+\delta_3|c_2\rangle)\nonumber\\ &&\!\!+\alpha_1\alpha_2|-1\rangle_{e_1}|RR\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+\beta_1\alpha_2|+1\rangle_{e_1}|LR\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)]\nonumber\\ &&\!\!\otimes[\gamma_1\beta_3|-1\rangle_{e_2}|a_1L\rangle_{ac}(\gamma_2|b_2\rangle+\delta_2|b_1\rangle)\nonumber\\ &&\!\!+\delta_1\beta_3|+1\rangle_{e_2}|a_2L\rangle_{ac}(\gamma_2|b_1\rangle+\delta_2|b_2\rangle)\nonumber\\ &&\!\!-\frac{\gamma_1\alpha_3}{\sqrt{2}}|\!-\!1\rangle_{e_2}|a_1(R\!-\!L)\rangle_{ac}(\gamma_2|b_2\rangle\!+\!\delta_2|b_1\rangle)\nonumber\\ &&\!\!-\frac{\delta_1\alpha_3}{\sqrt{2}}|\!-\!1\rangle_{e_2}|a_2(R\!+\!L)\rangle_{ac}(\gamma_2|b_2\rangle\!+\!\delta_2|b_1\rangle)].\nonumber\\ \end{eqnarray} Next, after a Hadamard operation is performed on NV$_2$, we lead the wave packets of photon $c$ from the two spatial modes $|c_1\rangle$ and $|c_2\rangle$ to CPBS, NV$_2$, CPBS, $X$, and $U$ (through the dash-dot-dotted line in Fig.\ref{figure5}).
The state of the quantum system becomes \begin{eqnarray} |\Phi_{abce_1e_2}\rangle_9\!\!&=&\!\![|-1\rangle_{e_1}\alpha_1\beta_2|RL\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+|+1\rangle_{e_1}\beta_1\beta_2|LL\rangle_{ab}(\gamma_3|c_1\rangle+\delta_3|c_2\rangle)\nonumber\\ &&\!\!+|-1\rangle_{e_1}\alpha_1\alpha_2|RR\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+|+1\rangle_{e_1}\beta_1\alpha_2|LR\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)]\nonumber\\ &&\!\!\otimes[|+\rangle_{e_2}\gamma_1\beta_3|a_1L\rangle_{ac}(\gamma_2|b_2\rangle+\delta_2|b_1\rangle)\nonumber\\ &&\!\!+|-\rangle_{e_2}\delta_1\beta_3|a_2L\rangle_{ac}(\gamma_2|b_1\rangle+\delta_2|b_2\rangle)\nonumber\\ &&\!\!-\frac{\gamma_1\alpha_3}{\sqrt{2}}|+\rangle_{e_2}|a_1(R-L)\rangle_{ac}(\gamma_2|b_2\rangle+\delta_2|b_1\rangle)\nonumber\\ &&\!\!+\frac{\delta_1\alpha_3}{\sqrt{2}}|-\rangle_{e_2}|a_2(R+L)\rangle_{ac}(\gamma_2|b_2\rangle+\delta_2|b_1\rangle)].\nonumber\\ \end{eqnarray} After another Hadamard operation is performed on NV$_2$, we put the wave packets of photon $c$ from the two spatial modes $|c_1\rangle$ and $|c_2\rangle$ into $H_P$, CPBS, NV$_2$, CPBS, $X$, $U$, and CPBS (through the dotted line in Fig.\ref{figure5}), and the state of the quantum system becomes \begin{eqnarray} |\Phi_{abce_1e_2}\rangle_{10}\!\!&=&\!\![|-1\rangle_{e_1}\alpha_1\beta_2|RL\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+|+1\rangle_{e_1}\beta_1\beta_2|LL\rangle_{ab}(\gamma_3|c_1\rangle+\delta_3|c_2\rangle)\nonumber\\ &&\!\!+|-1\rangle_{e_1}\alpha_1\alpha_2|RR\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)\nonumber\\ &&\!\!+|+1\rangle_{e_1}\beta_1\alpha_2|LR\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)]\nonumber\\ &&\!\!\otimes[|-1\rangle_{e_2}\gamma_1\beta_3|a_1L\rangle_{ac}(\gamma_2|b_2\rangle+\delta_2|b_1\rangle)\nonumber\\
&&\!\!+|+1\rangle_{e_2}\delta_1\beta_3|a_2L\rangle_{ac}(\gamma_2|b_1\rangle+\delta_2|b_2\rangle)\nonumber\\ &&\!\!+|-1\rangle_{e_2}\gamma_1\alpha_3|a_1R\rangle_{ac}(\gamma_2|b_2\rangle+\delta_2|b_1\rangle)\nonumber\\ &&\!\!+|+1\rangle_{e_2}\delta_1\alpha_3|a_2R\rangle_{ac}(\gamma_2|b_2\rangle+\delta_2|b_1\rangle)].\nonumber\\ \end{eqnarray} At last, we perform a Hadamard operation on each of NV$_1$ and NV$_2$, and then the spatial-mode bit-flip operations are performed on photons $b$ and $c$. By measuring the states of NV$_1$ and NV$_2$ with the orthogonal basis $\{|-1\rangle, |+1\rangle\}$, the outcome of the hybrid hyper-Toffoli gate on a three-photon system can be obtained by performing the conditional operations on photon $a$. If NV$_1$ is projected into the state $|+1\rangle_{e_1}$, a polarization operation $|L\rangle_a\rightarrow -|L\rangle_a$ is performed on photon $a$. If NV$_2$ is projected into the state $|+1\rangle_{e_2}$, a spatial-mode operation $|a_2\rangle\rightarrow -|a_2\rangle$ is performed on photon $a$. In this way, the state of the three-photon system $abc$ becomes \begin{eqnarray} |\Phi_{abc}\rangle\!\!&=&\!\![(\alpha_1\beta_2|RL\rangle_{ab}+\alpha_1\alpha_2|RR\rangle_{ab}+\beta_1\alpha_2|LR\rangle_{ab})\nonumber\\ &&\!\!(\gamma_3|c_1\rangle+\delta_3|c_2\rangle)+\beta_1\beta_2|LL\rangle_{ab}(\gamma_3|c_2\rangle+\delta_3|c_1\rangle)]\nonumber\\ &&\!\![(\gamma_1\beta_3|a_1L\rangle_{ac}+\gamma_1\alpha_3|a_1R\rangle_{ac}+\delta_1\alpha_3|a_2R\rangle_{ac})\nonumber\\ &&\!\!(\gamma_2|b_1\rangle+\delta_2|b_2\rangle)+\delta_1\beta_3|a_2L\rangle_{ac}(\gamma_2|b_2\rangle+\delta_2|b_1\rangle)].\nonumber\\ \end{eqnarray} This is the result of the hybrid hyper-Toffoli gate operating on the three-photon system $abc$. 
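The measurement-dependent corrections in this last step can be sketched in code. Below is a minimal illustration, assuming a toy amplitude-dictionary representation of photon $a$'s state in its two DOFs; the function name and basis labels are ours, not from the paper:

```python
# Feed-forward step of the hyper-Toffoli gate (sketch).  Photon a's state is a
# dict mapping (polarization, spatial mode) -> amplitude; labels are illustrative.

def feed_forward(state, nv1_outcome, nv2_outcome):
    """Apply the conditional single-photon operations on photon a.

    If NV1 is projected into |+1>, apply |L>_a -> -|L>_a.
    If NV2 is projected into |+1>, apply |a2> -> -|a2>.
    """
    out = {}
    for (pol, path), amp in state.items():
        if nv1_outcome == '+1' and pol == 'L':
            amp = -amp
        if nv2_outcome == '+1' and path == 'a2':
            amp = -amp
        out[(pol, path)] = amp
    return out

# Example: equal superposition over photon a's four basis states.
psi = {('R', 'a1'): 0.5, ('R', 'a2'): 0.5, ('L', 'a1'): 0.5, ('L', 'a2'): 0.5}
corrected = feed_forward(psi, '+1', '+1')
# |L>_a picks up one sign flip, |a2> another; |L, a2> picks up both (net +).
```

If both NV centers are instead projected into $|-1\rangle$, no correction is needed and the state is returned unchanged.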
\section{Discussion and Summary} \label{sec5} An NV center in diamond is an appropriate dipole emitter in cavity QED for obtaining the high-fidelity reflection-transmission property in the Purcell regime, owing to its long spin coherence time ($\sim$ms) \cite{NV3,NV5} and nanosecond manipulation time \cite{NV6}. When a diamond NV center is coupled to a micro- or nanocavity, the spontaneous emission of the dipole emitter into the zero-phonon line can be greatly enhanced, and the interaction between the NV center and the photon is also enhanced \cite{CNV2,CNV5}. There are many experimental demonstrations of diamond NV centers coupled to micro- or nanoresonators in either the strong-coupling \cite{CNV3} or the weak-coupling \cite{CNV4} regime. In 2012, Faraon \emph{et al.} \cite{CNV5} demonstrated experimentally that the zero-phonon transition rate of an NV center is greatly enhanced (by a factor of $\sim70$) by coupling to a photonic crystal resonator ($Q\sim3000$) fabricated in monocrystalline diamond, with a coupling strength of a few GHz, and they pointed out that a photonic crystal platform with a quality factor of $Q\sim10^5$ can operate at the onset of the strong-coupling regime. \begin{figure} \caption{(Color online) Fidelity ($F$) of a hybrid spatial-polarization hyper-CNOT gate vs the Purcell factor $F_P$ and the cavity decay rate $\lambda$.} \label{figure6} \end{figure} In the double-sided cavity-NV-center system, the two waveguides are simultaneously coupled to the cavity resonator mode with the coupling constants $\eta_1$ and $\eta_2$, respectively. As the backscattering in the waveguide is low, the asymmetry of the two coupling constants is mainly caused by the cavity intrinsic loss $\kappa$ \cite{FOS}. In experiments, a difference of the two coupling constants of $\Delta\eta\sim0.2\eta$ has been demonstrated, which yields approximately the same fidelity for both the transmission and reflection directions \cite{FOS}.
The reflection and transmission coefficients of a double-sided cavity-NV-center system are dominated by the Purcell factor $F_P$ and the cavity decay rate $\lambda=\kappa/\eta$. Under the resonant condition ($\omega_c=\omega_k=\omega$), the transmission and reflection rule for circularly polarized light of a given handedness can be described as \begin{eqnarray} \label{eq16} \begin{split} |R^\uparrow, -1\rangle \;\; \rightarrow \;\;& |rL^\downarrow+tR^\uparrow, -1\rangle, \\ |L^\downarrow, -1\rangle \;\; \rightarrow \;\;& |rR^\uparrow+tL^\downarrow, -1\rangle, \\ |R^\downarrow, +1\rangle \;\; \rightarrow \;\;& |rL^\uparrow+tR^\downarrow, +1\rangle, \\ |L^\uparrow, +1\rangle \;\; \rightarrow \;\;& |rR^\downarrow+tL^\uparrow, +1\rangle, \\ |R^\downarrow, -1\rangle \;\; \rightarrow \;\;& |t_0R^\downarrow+r_0L^\uparrow, -1\rangle, \\ |L^\uparrow, -1\rangle \;\; \rightarrow \;\;& |t_0L^\uparrow+r_0R^\downarrow, -1\rangle, \\ |R^\uparrow, +1\rangle \;\; \rightarrow \;\;& |t_0R^\uparrow+r_0L^\downarrow, +1\rangle, \\ |L^\downarrow, +1\rangle \;\; \rightarrow \;\;& |t_0L^\downarrow+r_0R^\uparrow, +1\rangle. \end{split} \end{eqnarray} The fidelity of a photonic quantum logic gate can be calculated as $F=\overline{|\langle\psi_f|\psi\rangle|^2}$, where $|\psi\rangle$ is the ideal final state of the quantum logic gate, and $|\psi_f\rangle$ is the final state of the quantum system when experimental factors are taken into account ($\alpha_i, \beta_i, \gamma_i, \delta_i \in[0,1]$). The fidelity of our hybrid photonic hyper-CNOT gate is shown in Fig.\ref{figure6}; it decreases for a small Purcell factor or a large cavity intrinsic loss. In Fig.\ref{figure6}, when the cavity intrinsic loss becomes larger, the fidelity of the hybrid photonic hyper-CNOT gate remains higher at a small Purcell factor, which corresponds to the regime $|r|\simeq|t_0|$.
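The transmission-reflection rule of Eq. (\ref{eq16}) can be encoded as a small lookup, which is convenient for checking gate constructions numerically. The following is a sketch; the coefficient names $r$, $t$, $t_0$, $r_0$ are kept symbolic as strings, and the function name and state encoding are our own, not from the paper:

```python
def scatter(pol, direction, spin):
    """One pass of a circularly polarized photon through the double-sided
    cavity-NV-center system on resonance, following Eq. (16).

    Spin |-1> couples to R^up and L^down; spin |+1> couples to R^down and L^up.
    A coupled photon is reflected with amplitude r (polarization and propagation
    direction both flip) or transmitted with t; an uncoupled photon is
    transmitted with t0 or reflected with r0.
    """
    flip_pol = {'R': 'L', 'L': 'R'}
    flip_dir = {'up': 'down', 'down': 'up'}
    couples_to_minus1 = (pol, direction) in {('R', 'up'), ('L', 'down')}
    if (spin == '-1') == couples_to_minus1:   # coupled transition
        return [('r', (flip_pol[pol], flip_dir[direction])),
                ('t', (pol, direction))]
    return [('t0', (pol, direction)),          # uncoupled transition
            ('r0', (flip_pol[pol], flip_dir[direction]))]

# First line of Eq. (16): |R^up, -1>  ->  r|L^down> + t|R^up>.
```

Substituting numerical values for $r$, $t$, $t_0$, $r_0$ at a given $F_P$ and $\lambda$ would then allow a direct numerical estimate of the gate fidelity.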
That is, the fidelity of the hybrid photonic hyper-CNOT gate is higher when the reflection-transmission properties of the uncoupled cavity and the coupled cavity are symmetric. In the case $|r|=|t_0|$, the relation between the Purcell factor and the cavity decay rate is $F_P=(1-\frac{\lambda^2}{4})/\lambda$. \begin{figure} \caption{(Color online) Fidelity ($F$) of a hybrid spatial-polarization hyper-CNOT gate on a two-photon system (red dashed line) and that of two identical polarization CNOT gates on a four-photon system (blue dotted line) vs the Purcell factor $F_P$. Here the cavity decay rate is chosen as $\lambda=0.1$, and the construction of the polarization CNOT gate is the same as that of the polarization part of the hyper-CNOT gate in Ref. \cite{QD4}.} \label{figure7} \end{figure} \begin{figure} \caption{(Color online) Fidelity ($F$) of a hybrid spatial-polarization hyper-Toffoli gate vs the cavity decay rate $\lambda$ in the case $|r|=|t_0|$ [$F_P=(1-\frac{\lambda^2}{4})/\lambda$], which is equal to the one for two polarization Toffoli gates on a six-photon system. } \label{figure8} \end{figure} The probability of recovering an incident photon after the operation increases with a large cavity decay rate $\eta$ \cite{FOS}. O'Shea \emph{et al.} \cite{FOS} noted that the maximal fidelity of the operation with cavity QED is achieved in the regime where the coupling strength $g$ is smaller than the cavity decay rate $\eta$ ($F_P>1$) rather than in the strong-coupling regime, and the maximal fidelity was obtained at the point $\lambda\simeq0.1$ in their experiment. In the case $\lambda=0.1$, the fidelity of our hybrid photonic hyper-CNOT gate on a two-photon system and that of two identical CNOT gates on a four-photon system in one DOF are shown in Fig.\ref{figure7}. It shows that the fidelity of our hybrid photonic hyper-CNOT gate is higher than that of the two CNOT gates in one DOF in the weak-coupling regime with a small Purcell factor.
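The balanced condition $|r|=|t_0|$ ties the Purcell factor to the cavity decay rate. A one-line numerical sketch of the quoted relation (the function name is ours):

```python
def purcell_balanced(lam):
    """Purcell factor satisfying |r| = |t0| for cavity decay rate lam = kappa/eta,
    using the relation F_P = (1 - lam**2/4) / lam quoted in the text."""
    return (1.0 - lam ** 2 / 4.0) / lam

# purcell_balanced(0.1) == 9.975; smaller decay rates require a larger Purcell
# factor to keep the coupled and uncoupled responses balanced.
```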
In the case $|r|=|t_0|$, the fidelity of the hybrid hyperparallel photonic logic gate is equal to that of the two identical photonic logic gates in one DOF (e.g., the fidelity of the hybrid hyper-Toffoli gate shown in Fig.\ref{figure8}). In the case $\lambda=0.1$ and $F_P=9.875$, both the fidelity of the hybrid photonic hyper-CNOT gate and that of the two identical CNOT gates in one DOF are $F=99.7\%$. That is, the hybrid hyperparallel photonic logic gate can decrease the effect of environmental noise in the asymmetric condition of the double-sided cavity-NV-center system in the weak-coupling regime with a small Purcell factor. The reflection property of one-sided dipole-cavity protocols is fragile, because the reflectances of the uncoupled cavity and the coupled cavity should be balanced to obtain a high fidelity, while the reflection-transmission property of double-sided dipole-cavity systems is robust and flexible even with a large reflectance and transmittance difference between the uncoupled cavity and the coupled cavity \cite{QD,QD4}. Moreover, a double-sided dipole-cavity system has two spatial modes, so it is very convenient to use this DIT to investigate robust and flexible quantum information processing based on the polarization and spatial-mode DOFs of photon systems. Both CNOT and Toffoli gates belong to the set of universal quantum logic gates, and they can form universal quantum computing with the assistance of single-qubit rotation gates \cite{QC}. Both our hybrid polarization-spatial hyper-CNOT gate and hyper-Toffoli gate can form universal hyperparallel photonic quantum computing assisted by rotations on a single photon in two DOFs, which is useful in quantum information protocols with multi-qubit systems in several DOFs. For example, hyperentanglement is useful in quantum communication protocols for increasing the channel capacity \cite{HQC}, resorting to the entanglement in several DOFs of photon systems \cite{heper1}.
With hyperparallel quantum gates, the generation and complete analysis of hyperentangled states can be achieved in a relatively simple way, compared with protocols based on several cascaded quantum entangling gates \cite{multiqubit2,multiqubit3,multiqubit4}. Besides, some quantum information processes can be implemented with fewer resources based on several DOFs of photon systems, resorting to the hyperparallel quantum gates. For example, in the preparation of four-qubit cluster states, only one hyper-CNOT gate operation (photons interact with electron spins four times) and a wave plate are required in the protocol with two photons in two DOFs \cite{multiqubit6}, while three CNOT gate operations (photons interact with electron spins six times) are required in the protocol with four photons in one DOF. In summary, we have presented the DIT of a double-sided cavity-NV-center system, which remains evident even in the weak-coupling regime. The reflection-transmission property of circularly polarized light interacting with a double-sided cavity-NV-center system can be used for photon-photon interaction in quantum information processing based on both the polarization and spatial-mode DOFs. With the DIT of double-sided cavity-NV-center systems, we have proposed a hybrid photonic hyper-CNOT gate and a hybrid photonic hyper-Toffoli gate for hyperparallel photonic quantum computation. A hyperparallel hybrid quantum logic gate on a quantum system in both the polarization and spatial-mode DOFs is equivalent to two identical quantum gates operating simultaneously on a system in one DOF, and it can reduce the resource consumption, photonic dissipation, and asymmetric environmental noise of the double-sided cavity-NV-center system in the weak-coupling regime with a small Purcell factor.
Besides, these hyperparallel quantum logic gates are useful for quantum information protocols with multi-qubit systems in several DOFs, especially the generation and analysis of hyperentangled states \cite{multiqubit2,multiqubit3,multiqubit4}. Double-sided cavity QED can be used for quantum information processing even in a bad cavity regime (the Purcell regime) \cite{QD,QD4,RT}, and, owing to its reflection-transmission optical property, it is suitable for investigating robust and flexible quantum information processing based on both the polarization and spatial-mode DOFs \cite{multiqubit2,multiqubit3,multiqubit4}. Besides the quantum computation with two DOFs of a photon as two qubits \cite{twoDOF,twoDOF1}, double-sided cavity QED can also be used for quantum information processing with two DOFs by using a photon as a qudit. Moreover, a multiqubit logic gate based on one DOF can be simplified, with fewer photon resources, by resorting to two DOFs of photon systems \cite{multiqubit}. \section*{ACKNOWLEDGMENTS} This work is supported by the National Natural Science Foundation of China under Grants Nos. 11174039 and 11474026, and NECT-11-0031. \end{document}
arXiv
The convergence rate analysis of the symmetric ADMM for the nonconvex separable optimization problems Stochastic comparisons of parallel systems with scale proportional hazards components equipped with starting devices Bayesian decision making in determining optimal leased term and preventive maintenance scheme for leased facilities Chih-Chiang Fang , School of Computer Science and Software, Zhaoqing University, Guangdong 526061, China *Corresponding author: Chih-Chiang Fang Received June 2019 Revised June 2020 Published August 2020 Under a business competitive environment, quite a few enterprises choose capital leasing to reduce tax payment and investment risk instead of buying facilities. Since the durability and service life of leased facilities will be longer, the breakdowns and deterioration of leased facilities are inevitable during lease period. Accordingly, in order to reduce the related costs and keep the facility's health during lease period, preventive maintenances are required to perform to reduce the cost of free-repair warranty and maintain customers' satisfaction. However, performing preventive maintenance is not easy to scheme due to the scarcity of historical failure data. Accordingly, the study integrates lease and maintenance decisions into a synthetic strategy, and it can be applied under the situation of only expert's evaluation and/or scare historical failure data by employing Bayesian analyses. In this study, the mathematical models and corresponding algorithms are developed to determine the best preventive maintenance scheme and the optimal term of contract for leased facilities to maximize the expected profit. Moreover, the computerized architecture is also proposed, and it can help the lessor to solve the issue in practice. Finally, numerical examples and the sensitive analyses are provided to illustrate the managerial strategies under different leased period and the preventive maintenance policies. 
Keywords: Bayesian analysis, nature conjugate prior, preventive maintenance, leased facility. Mathematics Subject Classification: Primary: 62C10, 62C25; Secondary: 90B25. Citation: Chih-Chiang Fang. Bayesian decision making in determining optimal leased term and preventive maintenance scheme for leased facilities. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2020127 N. Aras, R. Güllü and and S. Yürülemz, Optimal inventory and pricing policies for remanufacturable leased products, International Journal of Production Economics, 133 (2011), 262-271. doi: 10.1016/j.ijpe.2010.01.024. Google Scholar A. Ben Mabrouk, A. Chelbi and M. Radhoui, Optimal imperfect maintenance strategy for leased equipment, International Journal of Production Economics, 178 (2016), 57-64. doi: 10.1016/j.ijpe.2016.04.024. Google Scholar S. Bourjade, R. Huc and C. Muller-Vibes, Leasing and profitability: Empirical evidence from the airline industry, Transportation Research Part A, 97 (2017), 30-47. doi: 10.1016/j.tra.2017.01.001. Google Scholar J. Cao and W. Xie, Optimization of a condition-based duration-varying preventive maintenance policy for the stockless production system based on queueing model, Journal of Industrial and Management Optimization, 15 (2019), 1049-1083. doi: 10.3934/jimo.2018085. Google Scholar Y. H. Chun, Optimal number of periodic preventive maintenance operations under warranty, Reliability Engineering and System Safety, 37 (1992), 223-225. doi: 10.1016/0951-8320(92)90127-7. Google Scholar J. S. Dagpunar and N. Jack, Preventive maintenance strategy for equipment under warranty, Microelectronics Reliability, 34 (1994), 1089-1093. doi: 10.1016/0026-2714(94)90073-6. Google Scholar P. Desai and D. Purohit, Leasing and selling: Optimal marketing strategies for a durable goods firm, Management Science, 44 (1998), 19-34. doi: 10.1287/mnsc.44.11.S19. Google Scholar A. N. Das and A. N. 
Sarmah, Preventive replacement models: An overview and their application in process industries, European Journal of Industrial Engineering, 4 (2010), 280-307. doi: 10.1504/EJIE.2010.033332. Google Scholar S. H. Ding and S. Kamaruddin, Maintenance policy optimization-literature review and directions, The International Journal of Advanced Manufacturing Technology, 76 (2015), 1263-1283. doi: 10.1007/s00170-014-6341-2. Google Scholar M. Ebrahimi, S. M. T. F. Ghomi and B. Karimi, Application of the preventive maintenance scheduling to increase the equipment reliability: Case study - bag filters in cement factory, Journal of Industrial and Management Optimization, 16 (2020), 189-205. doi: 10.3934/jimo.2018146. Google Scholar C. C. Fang and Y. S. Huang, A study on decisions of warranty, pricing, and production with insufficient information, Computers and Industrial Engineering, 59 (2010), 241-250. doi: 10.1016/j.cie.2010.04.005. Google Scholar H. Garg, Reliability, availability and maintainability analysis of industrial systems using PSO and fuzzy methodology, MAPAN-Journal of Metrology Society of India, 29 (2014), 115-129. doi: 10.1007/s12647-013-0081-x. Google Scholar H. Garg, Bi-criteria optimization for finding the optimal replacement interval for maintaining the performance of the process industries, Modern Optimization Algorithms and Application in Engineering and Economics, 25 (2016), 33 pp. doi: 10.4018/978-1-4666-9644-0.ch025. Google Scholar H. Garg, M. Rani and S. P. Sharma, Preventive maintenance scheduling of the pulping unit in a paper plant, Japan Journal of Industrial and Applied Mathematics, 30 (2013), 397-414. doi: 10.1007/s13160-012-0099-4. Google Scholar H. Garg and S. P. Sharma, A two-phase approach for reliability and maintainability analysis of an industrial system, International Journal of Reliability, Quality and Safety Engineering, 19 (2012), 1250013. doi: 10.1142/S0218539312500131. Google Scholar B. Hadjaissa, K. Ameur, S. M. Ait Cheikh and N. 
Essounbouli, Bi-objective optimization of maintenance scheduling for power systems, The International Journal of Advanced Manufacturing Technology, 85 (2016), 1361-1372. doi: 10.1007/s00170-015-8053-7. Google Scholar F. Hu and Q. Zong, Optimal periodic preventive maintenance policy and lease period for leased equipment, Journal of Tianjin University Science and Technology, 41 (2008), 248-253. Google Scholar Y. S. Huang, A structural design of decision support systems for deteriorating repairable systems, Computers and Operations Research, 31 (2004), 1135-1145. doi: 10.1016/S0305-0548(03)00069-8. Google Scholar Y. S. Huang and V. M. Bier, A natural conjugate prior for the nonhomogeneous poisson process with a power law intensity function, Communications in Statistics-Simulation and Computation, 27 (1998), 525-551. doi: 10.1080/03610919808813493. Google Scholar Y. S. Huang and V. M. Bier, A natural conjugate prior for the nonhomogeneous Poisson process with an exponential intensity function, Communications in Statistics-Simulations and Computation, 28 (1999), 1479-1509. doi: 10.1080/03610929908832368. Google Scholar B. P. Iskandar and H. Husniah, Optimal preventive maintenance for a two dimensional lease contract, Computers and Industrial Engineering, 113 (2017), 693-703. doi: 10.1016/j.cie.2017.09.028. Google Scholar R. Jamshidi and M. M. Seyyed Esfahani, Maintenance policy determination for a complex system consisting of series and cold standby system with multiple levels of maintenance action, The International Journal of Advanced Manufacturing Technology, 78 (2015), 1137-1346. doi: 10.1007/s00170-014-6727-1. Google Scholar V. Jayabalan and D. Chaudhuri, Cost optimization of maintenance scheduling for a system with assured reliability, IEEE Transactions on Reliability, 41 (1992), 21-25. doi: 10.1109/24.126665. Google Scholar H. Jin, L. Hai and X. 
Tang, An optimal maintenance strategy for multi-state systems based on a system linear integral equation and dynamic programming, Journal of Industrial and Management Optimization, 16 (2020), 965-990. doi: 10.3934/jimo.2018188. Google Scholar B. S. Kim and Y. Ozturkoglu, Scheduling a single machine with multiple preventive maintenance activities and position-based deteriorations using genetic algorithms, The International Journal of Advanced Manufacturing Technology, 67 (2013), 1127-1137. doi: 10.1007/s00170-012-4553-x. Google Scholar C. S. Kim, I. Djamaludin, I. and D. N. P. Murthy, Warranty and discrete preventive maintenance, Reliability Engineering and System Safety, 84 (2004), 301-309. doi: 10.1016/j.ress.2003.12.001. Google Scholar R. T. Kleiman, The characteristics of venture lease financing, Journal of Equipment Lease Financing, 19 (2001), 1-10. Google Scholar C. Liu, Y. Fang, C. Zhao and J. Wang, Multiple common due-dates assignment and optimal maintenance activity scheduling with linear deteriorating jobs, Journal of Industrial and Management Optimization, 13 (2017), 713-720. doi: 10.3934/jimo.2016042. Google Scholar S. Martorell, A. Sanchez, A. and V. Serradell, Age-dependent reliability model considering effects of maintenance and working conditions, Reliability Engineering and System Safety, 64 (1999), 19-31. doi: 10.1016/S0951-8320(98)00050-7. Google Scholar W. T. Moore and S. N. Chen, The decision to lease or purchase under uncertainty: A Bayesian approach, The Engineering Economist, 29 (1984), 195-206. doi: 10.1080/00137918408967711. Google Scholar P. Müller and G. Parmigiani, Optimal design via curve fitting of Monte Carlo experiments, Journal of the American Statistical Association - Theory and Methods, 90 (1995), 1322-1330. doi: 10.2307/2291522. Google Scholar A. Nisbet and A. Ward, Radiotherapy equipment-purchase or lease?, The British Journal of Radiology, 74 (2000), 735-744. doi: 10.1259/bjr.74.884.740735. Google Scholar R. Niwas and H. 
Garg, An approach for analyzing the reliability and profit of an industrial system based on the cost free warranty policy, Journal of the Brazilian Society of Mechanical Sciences and Engineering, 40 (2018), Art. 265. doi: 10.1007/s40430-018-1167-8. Google Scholar I. A. Papazoglou, Bayesian decision analysis and reliability certification, Reliability Engineering and System Safety, 66 (1999), 177-198. doi: 10.1016/S0951-8320(99)00035-6. Google Scholar D. F. Percy, Bayesian enhanced strategic decision making for reliability, European Journal of Operational Research, 139 (2002), 133-145. doi: 10.1016/S0377-2217(01)00177-1. Google Scholar J. Pongpech and D. N. P. Murthy, Optimal periodic preventive maintenance policy for leased equipment, Reliability Engineering and System Safety, 91 (2006), 772-777. doi: 10.1016/j.ress.2005.07.005. Google Scholar Y. Saito, T. Dohi and W. Y. Yun, Uncertainty analysis for a periodic replacement problem with minimal repair: Parametric bootstrapping approach, International Journal of Industrial Engineering: Theory, Applications and Practice, 21 (2014), 337-347. Google Scholar J. Schutz and N. Rezg, Maintenance strategy for leased equipment, Computers and Industrial Engineering, 66 (2013), 593-600. doi: 10.1016/j.cie.2013.05.004. Google Scholar M. Sheikhalishahi, H. Heidaryan-Baygy, S. Abdolhossein Zadeh and A. Azadeh, Comparison between condition-based, age-based and failure-based maintenance policies in parallel and series configurations: A simulation analysis, International Journal of Industrial Engineering: Theory, Applications and Practice, 24 (2017), 295-305. Google Scholar J. Shin, J. R. Morrison and A. Kalir, Optimization of preventive maintenance plans in G/G/M queueing networks and numerical study with models based on semiconductor wafer fabs, International Journal of Industrial Engineering: Theory, Applications and Practice, 23(5) (2016), 302-317. Google Scholar D. W. Steeneck and S. C. 
Sarin, Product design for leased products under remanufacturing, International Journal of Production Economics, 202 (2018), 132-144. doi: 10.1016/j.ijpe.2018.04.025. Google Scholar J. Taheri-Tolgari, M. Mohammadi, B. Naderi, A. Arshadi-Khamseh and A. Mirzazadeh, An inventory model with imperfect item, inspection errors, preventive maintenance and partial backlogging in uncertainty environment, Journal of Industrial and Management Optimization, 15 (2019), 1317-1344. doi: 10.3934/jimo.2018097. Google Scholar G. Walter and S. D. Flapper, Condition-based maintenance for complex systems based on current component status and Bayesian updating of component reliability, Reliability Engineering and System Safety, 168 (2017), 227-239. doi: 10.1016/j.ress.2017.06.015. Google Scholar H. Wang and H. Pham, Some maintenance models and availability with imperfect maintenance in production systems, Annals of Operations Research, 91 (1999), 305-318. doi: 10.1023/A:1018910109348. Google Scholar X. Wang, L. Li and M. Xie, Optimal preventive maintenance strategy for leased equipment under successive usage-based contracts, International Journal of Production Research, 57 (2019), 5705-5724. doi: 10.1080/00207543.2018.1542181. Google Scholar C. W. Yeh and C. C. Fang, Optimal pro-rata warranty decision with consideration of the marketing strategy under insufficient historical reliability data, International Journal of Advanced Manufacturing Technology, 71 (2014), 1757-1772. doi: 10.1007/s00170-013-5596-3. Google Scholar R. H. Yeh and W. L. Chang, Optimal threshold value of failure-rate for Leased products with preventive maintenance actions, Mathematical and Computer Modelling, 46 (2007), 730-737. doi: 10.1016/j.mcm.2006.12.001. Google Scholar R. H. Yeh and H. C. Lo, Optimal preventive-maintenance warranty policy for repairable products, European Journal of Operational Research, 134 (2001), 59-69. doi: 10.1016/S0377-2217(00)00238-1. Google Scholar R. H. Yeh, K. C. Kao and W. L. 
Chang, Optimal preventive maintenance policy for leased equipment using failure rate reduction, Computers and Industrial Engineering, 57(1) (2009), 304-309. doi: 10.1016/j.cie.2008.11.025. Google Scholar R. H. Yeh, K. C. Kao and W. L. Chang, Preventive-maintenance policy for leased products under various maintenance costs, Expert Systems with Applications, 38 (2011), 3558-3562. doi: 10.1016/j.eswa.2010.08.144. Google Scholar Y. Zhang, X. Zhang, J. Zeng, J. Wang and S. Xue, Lessees satisfaction and optimal condition-based maintenance policy for leased system, Reliability Engineering and System Safety, 191 (2019), Art. 106532. doi: 10.1016/j.ress.2019.106532. Google Scholar Figure 1. Timeline of the PM Model Figure 2. Maintenance Scheme under Imperfect Recovery Figure 3. The Flow Chart of the Heuristic Process for Obtaining $ N^* $ and $ q^* $ Figure 4. Flowchart for the Bayesian Solution Algorithm Figure 5. Computerized Implementation Architecture Figure 6. Average Profits per Unit and Year for Maintenance Plans 1, 2, 3 Estimated by Prior Analysis Figure 7. Average Profits per Unit and Year for Maintenance Plan 1 Estimated by Prior and Posterior Analyses Figure 8. The Impact of $ E $($ \alpha $), $ E $($ \beta $), $ \sigma(\alpha) $ and $ \sigma(\beta) $ on Average Profit Figure 9. The Impact of Minimal Repair Cost on Average Profit Figure 10. The Impact of Base Cost for a PM action on Average Profit Figure 11. The Impact of Increasing Rate of PM Cost on Average Profit Figure 12. The Impact of Time Discount Rate on Average Profit Figure 13. The Impact of Depreciation Rate on Average Profit Table 1. 
The detailed information of three maintenance plans Maintenance Plan 1 Maintenance Plan 2 Maintenance Plan 3 Parameters for the deterioration judged by experts ${u_\alpha}$= 1.60, ${u_\beta}$= 2.10, ${\sigma_\alpha}$= 1.10, ${\sigma_\beta}$= 0.80 Age reduction factors ${\delta ^{M_P^1}}$ = 0.7 ${\delta ^{M_P^2}}$ = 0.8 ${\delta ^{M_P^3}}$ = 0.9 Base cost for a PM action $C_F^{M_P^1}$=600 $C_F^{M_P^2}$=750 $C_F^{M_P^3}$=900 Periodically increasing rates of PM cost ${\tau ^{M_P^1}}$=0.2 ${\tau ^{M_P^2}}$=0.225 data ${\tau ^{M_P^3}}$=0.25 Depreciation rate $\rho$=0.15 Interval of PM; Time segment $x$=0.5 years; $T_S$=0.5 year The minimal and maximal planned lease terms $T_L^{Min}$=2 years; $T_L^{Max}$=12 years Rental of per half-year $R_0$=9800 Time discount rate $\epsilon$=0.02 Production cost of an equipment $V$=9800 Penalty cost for repair time over the time limit $C_Penalty= 170$ Expectation of performing a minimal repair $E(t_r)$= 9 hours Standard deviation of performing a minimal repair $\sigma(t_r)$= 5 hours Tolerable waiting time limit for performing a minimal repair $\varphi$=4.5 hours Expected cost of performing a minimal repair $C_mr$=350 Table 2. 
Expected failures, repair costs, preventive costs, production cost, residual value and average profits per unit and year for maintenance plans 1, 2, 3 estimated by prior analysis

Columns: Time | $E_{\text{Prior}}[\Phi(T_L,x,\delta^{M_P^q},\alpha,\beta)]$ (Plans 1/2/3) | PM Cost (Plans 1/2/3) | Repair Cost (Plans 1/2/3) | $V$ | $V_{residual}$ | $E_{\text{Prior}}[\pi]$ (Plans 1/2/3)

2    | 3.06 2.74 2.44 | 2760 3495 4194 | 1500 1343 1194 | 98000 | 70805 | 3292 3003 2729
2.5  | 4.24 3.69 3.17 | 3600 4575 5490 | 2077 1807 1553 | 98000 | 65279 | 3472 3190 2926
4.5  | 10.97 8.66 6.63 | 7560 9720 11664 | 5373 4243 3250 | 98000 | 47164 | 3932 3703 3492
5    | 13.22 10.23 7.64 | 8700 11213 13455 | 6480 5011 3741 | 98000 | 43483 | 3987 3779 3584
5.5  | 15.75 11.94 8.70 | 9900 12788 15345 | 7716 5848 4261 | 98000 | 40089 | 4021 3835 3659
*6   | 18.55 13.79 9.81 | 11160 14445 17334 | 9090 6758 4809 | 98000 | 36961 | *4033 3874 3718
6.5  | 21.65 15.80 10.99 | 12480 16185 19422 | 10611 7744 5386 | 98000 | 34076 | 4025 3896 3761
*7   | 25.08 17.98 12.23 | 13860 18008 21609 | 12289 8810 5993 | 98000 | 31417 | 3998 *3902 3790
*8   | 32.98 22.85 14.90 | 16800 21900 26280 | 16160 11194 7301 | 98000 | 26704 | 3885 3869 *3808
8.5  | 37.50 25.55 16.33 | 18360 23970 28764 | 18375 12521 8002 | 98000 | 24620 | 3802 3831 3798
9    | 42.43 28.46 17.83 | 19980 26123 31347 | 20793 13943 8736 | 98000 | 22698 | 3701 3780 3778
10   | 53.65 34.88 21.03 | 23400 30675 36810 | 26289 17089 10306 | 98000 | 19294 | 3448 3640 3705
10.5 | 59.99 38.41 22.74 | 25200 33075 39690 | 29394 18822 11143 | 98000 | 17788 | 3296 3553 3654

Table 3.
Expected failures, repair costs, preventive cost, production cost, residual value and average profits per unit and year for prior and posterior analyses

Columns: Time | $E[\Phi(T_L,x,\delta^{M_P^q},\alpha,\beta)]$ (Prior/Posterior) | Repair Cost (Prior/Posterior) | PM Cost | $V$ | $V_{residual}$ | $E[\pi]$ (Prior/Posterior)

2    | 3.06 3.90 | 1500 1911 | 1500 | 98000 | 70805 | 3292 3087
2.5  | 4.24 5.82 | 2077 2856 | 2077 | 98000 | 65279 | 3472 3161
3.5  | 7.17 10.68 | 3514 5233 | 3514 | 98000 | 55487 | 3752 3261
4    | 8.95 13.58 | 4387 6655 | 4387 | 98000 | 51157 | 3854 3287
*4.5 | 10.97 16.79 | 5373 8226 | 5373 | 98000 | 47164 | 3932 *3298
5    | 13.22 20.29 | 6480 9944 | 6480 | 98000 | 43483 | 3987 3295
5.5  | 15.75 24.09 | 7716 11805 | 7716 | 98000 | 40089 | 4021 3277
*6   | 18.55 28.18 | 9090 13807 | 9090 | 98000 | 36961 | *4033 3247
6.5  | 21.65 32.54 | 10611 15946 | 10611 | 98000 | 34076 | 4025 3204
7    | 25.08 37.19 | 12289 18222 | 12289 | 98000 | 31417 | 3998 3150
10   | 53.65 70.67 | 26289 34627 | 26289 | 98000 | 19294 | 3448 2614
10.5 | 59.99 77.15 | 29394 37805 | 29394 | 98000 | 17788 | 3296 2495

Table 4. The impact of $E(\alpha)$ and $\sigma(\alpha)$ on expected failure times and repair cost

Varying $E(\alpha)$ (expected failure times, then expected repair cost):
Time 2:  2.32 2.69 3.06 3.43 3.79 4.15 | 1136 1319 1500 1680 1858
Time 4:  6.30 7.61 8.95 10.34 11.75 13.21 | 3085 3727 4387 5065 5759
Time 5:  9.00 11.07 13.22 15.47 17.81 20.25 | 4412 5423 6480 7581 8726
Time 6:  12.26 15.31 18.55 21.98 25.59 29.42 | 6005 7500 9090 10769 12541
Time 7:  16.10 20.42 25.08 30.07 35.41 41.13 | 7890 10005 12289 14736 17352
Time 8:  20.60 26.51 32.98 40.01 47.61 55.88 | 10096 12989 16160 19603 23331
Time 10: 31.83 42.09 53.65 66.54 80.83 96.74 | 15596 20622 26289 32602 39606
Time 11: 38.69 51.83 66.85 83.80 102.81 124.23 | 18959 25398 32758 41060 50378
Time 12: 46.49 63.08 82.29 104.23 129.14 157.52 | 22781 30910 40324 51075 63280
Varying $\sigma(\alpha)$:
Time 5:  14.14 13.65 13.22 12.85 12.51 12.20 | 6928 6688 6480 6295 6128
Time 12: 100.91 90.56 82.29 75.48 69.75 64.87 | 49444 44376 40324 36986 34179

Table 5.
The impact of $E(\beta)$ and $\sigma(\beta)$ on expected failure times and repair cost

Varying $E(\beta)$ over 1.5, 1.7, 1.9, 2.1, 2.3, 2.5, 2.7 (expected failure times, then expected repair cost):
Time 2:  3.02 3.07 3.08 3.06 3.03 2.98 2.92 | 1481 1502 1507 1500 1482 1458 1428
Time 4:  7.07 7.65 8.28 8.95 9.66 10.40 11.17 | 3463 3749 4059 4387 4732 5094 5476
Time 5:  9.64 10.66 11.87 13.22 14.73 16.40 18.26 | 4723 5226 5816 6480 7219 8038 8946
Time 6:  12.65 14.25 16.22 18.55 21.25 24.35 27.92 | 6199 6981 7950 9090 10410 11932 13681
Time 7:  16.17 18.47 21.45 25.08 29.42 34.59 40.70 | 7923 9050 10510 12289 14418 16947 19943
Time 8:  20.27 23.42 27.66 32.98 39.52 47.50 57.20 | 9932 11475 13552 16160 19366 23276 28026
Time 9:  25.04 29.19 34.97 42.43 51.82 63.53 78.08 | 12269 14301 17138 20793 25394 31132 38261
Time 10: 30.58 35.88 43.54 53.65 66.64 83.17 104.12 | 14982 17582 21336 26289 32653 40752 51016
Time 11: 37.00 43.63 53.52 66.85 84.31 106.94 136.13 | 18129 21377 26225 32758 41313 52400 66704
Time 12: 44.44 52.56 65.08 82.29 105.23 135.44 175.07 | 21777 25754 31890 40324 51561 66367 85783
Varying $\sigma(\beta)$:
Time 5:  13.63 13.42 13.27 13.22 13.29 13.51 13.94 | 6680 6574 6505 6480 6511 6618 6833
Time 6:  18.85 18.59 18.47 18.55 18.88 19.55 20.76 | 9239 9107 9052 9090 9249 9579 10172
Time 10: 49.25 49.50 50.83 53.65 58.82 68.23 86.78 | 24134 24256 24905 26289 28824 33432 42524
Chih-Chiang Fang
Schedule for: 16w5085 - Random Structures in High Dimensions
Beginning on Sunday, June 26 and ending Friday July 1, 2016. All times in Oaxaca, Mexico time, CDT (UTC-5).

Sunday, June 26

14:00 - 23:59 Check-in begins (Front desk at your assigned hotel)
19:30 - 22:00 Dinner (Restaurant Hotel Hacienda Los Laureles)
20:30 - 21:30 Informal gathering (Hotel Hacienda Los Laureles)
A welcome drink will be served at the hotel.

Monday, June 27

07:30 - 08:45 Breakfast (Restaurant at your assigned hotel)
08:45 - 09:00 Introduction and Welcome (Conference Room San Felipe)

09:00 - 09:50 Geoffrey Grimmett: An algebraic approach to counting self-avoiding walks (Conference Room San Felipe)
What can be said about the connective constant of a graph? The celebrated work of Hara and Slade on self-avoiding walks is directed largely at the effect of dimension. This talk is devoted to recent work (with Zhongyang Li) based more on considerations of algebra and combinatorics than of geometry or analysis. We will report on inequalities for connective constants, and will present a locality theorem. Cayley graphs of finitely generated groups are examples of special interest, and we shall discuss the relevance of amenability. The Cayley graph of the Grigorchuk group (which has a compressed exponential growth rate) poses special challenges.

10:00 - 10:50 Tony Guttmann: Random and self-avoiding walks subject to tension and compression
In recent years there have been important experiments involving the pulling of polymers from a wall. These are carried out with atomic force microscopes and other devices to determine properties of polymers, including biological polymers such as DNA. We have studied a simple model of this system, comprising two-dimensional self-avoiding walks, anchored to a wall at one end and then pulled from the wall at the other end. In addition, we allow for binding of monomers in contact with the wall.
The geometry is shown in the following figure. There are two parameters in the model, the strength of the interaction of monomers with the surface (wall), and the force, normal to the wall, pulling the polymer. We have constructed (numerically) the complete phase diagram, and can prove the locus of certain phase boundaries in that phase diagram, and also the order of certain phase transitions as the phase boundaries are crossed. A schematic of the phase diagram is shown below.

Most earlier work focussed on simpler random, directed and partially directed walk models. There has been little numerical work on the more realistic SAW model. A recent rigorous treatment by van Rensburg and Whittington established the existence of a phase boundary between an adsorbed phase and a ballistic phase when the force is applied normal to the surface. We give the first proof that this phase transition is first-order. As well as finding the phase boundary very precisely, we also estimate various critical points and exponents to high precision, or, in some cases, exactly (conjecturally).

We use exact enumeration and series analysis techniques to identify this phase boundary for SAWs on the square lattice. Our results are derived from a combination of three ingredients: (i) rigorous results; (ii) faster algorithms giving extended series data; (iii) new numerical techniques to extract information from the data.

A second calculation considers polymers squeezed towards a surface by a second wall parallel to the surface wall. In this problem we ignore the interaction between surface monomers and the wall. We find, remarkably, that in this geometry there arises an unexpected stretched exponential term in the asymptotic expression for the number of configurations. We show explicitly that this can occur even if one uses simple random walks as the polymer model, rather than the more realistic self-avoiding walks.
Aspects of this work have been carried out with Nick Beaton, Iwan Jensen, Greg Lawler and Stu Whittington.

11:00 - 11:30 Coffee Break (Conference Room San Felipe)

11:30 - 12:20 Omer Angel: Random walks on half planar maps
We study random walks on random planar maps with the half plane topology. In the parabolic case we prove recurrence, and in the hyperbolic case positive speed away from the boundary. Joint works with Gourab Ray and Asaf Nachmias.

12:30 - 12:40 Group Photo (Hotel Hacienda Los Laureles)
13:30 - 15:00 Lunch (Restaurant Hotel Hacienda Los Laureles)

15:00 - 15:50 Tom Kennedy: The first order correction to the exit distribution for some random walks
We consider three random walk models on several two-dimensional lattices - the usual nearest neighbor random walk, the nearest neighbor random walk without backtracking and the smart kinetic walk (a type of self-avoiding walk). For all these models the distribution of the point where the walk exits a simply connected domain in the plane converges weakly to the harmonic measure on the boundary as the lattice spacing goes to zero. We study the first order correction, i.e., the limit of the difference divided by the lattice spacing. Monte Carlo simulations lead us to conjecture that this measure has density $c f(z)$ where the function $f(z)$ only depends on the domain and the constant $c$ only depends on the model and the lattice. So there is a form of universality for this first order correction. For a particular random walk model with continuously distributed steps we can prove the conjecture.

16:00 - 16:30 Coffee Break (Conference Room San Felipe)

16:30 - 17:20 Balint Toth: Central limit theorem for random walks in doubly stochastic random environment
We prove a CLT under diffusive scaling for the displacement of a random walk on $Z^d$ in stationary and ergodic doubly stochastic random environment, under the $H_{-1}$-condition imposed on the drift field.
The condition is equivalent to assuming that the stream tensor of the drift field be stationary and square integrable. Joint work with Gady Kozma.

Tuesday, June 28

09:00 - 09:50 Remco Van der Hofstad: Progress in high-dimensional percolation
A major breakthrough in percolation was the 1990 result by Hara and Slade proving mean-field behavior of percolation in high dimensions, showing that at criticality there is no percolation and identifying several percolation critical exponents. The main technique used is the lace expansion, a perturbation technique that allowed Hara and Slade to compare percolation paths to random walks based on the idea that faraway pieces of percolation paths are almost independent in high dimensions. In this talk, we describe these seminal 1990 results, as well as a number of novel results for high-dimensional percolation that have been derived since and that build on the shoulders of these giants. Time permitting, I intend to highlight the following topics:
(1) Critical percolation on the tree and critical branching random walk to fix ideas and to obtain insight in the kind of results that can be proved in high-dimensional percolation;
(2) The recent computer-assisted proof, with Robert Fitzner, that identifies the critical behavior of nearest-neighbor percolation above 11 dimensions using the so-called Non-Backtracking Lace Expansion (NoBLE) that builds on the unpublished work by Hara and Slade proving mean-field behavior above 18 dimensions;
(3) The identification of arm exponents in high-dimensional percolation in two works by Asaf Nachmias and Gady Kozma, using a clever and novel difference inequality argument, and its implications for the incipient infinite cluster and random walks on them;
(4) Super-process limits of large critical percolation clusters and the incipient infinite cluster.
We assume no prior knowledge about percolation.
10:00 - 10:50 Akira Sakai: The lace expansion for the nearest-neighbor models on the BCC lattice
The lace expansion was initiated by Brydges and Spencer in 1985. Since then, it has been a powerful tool to rigorously prove mean-field (MF) results for various statistical-mechanical models in high dimensions. For example, Hara and Slade succeeded in showing the MF behavior for nearest-neighbor self-avoiding walk on $\mathbb{Z}^{d \geq 5}$. Recently, van der Hofstad and Fitzner managed to prove the MF results for nearest-neighbor percolation on $\mathbb{Z}^{d \geq 11}$ by using the so-called NoBLE (Non-Backtracking Lace Expansion). For sufficiently spread-out percolation, however, the MF results are known to hold for all $d$ above the percolation upper-critical dimension 6, without using the NoBLE. Our goal is to show the MF behavior for the nearest-neighbor models, for all $d$ above the model-dependent upper-critical dimension, in a simpler and more accessible way. To achieve this goal, we consider the nearest-neighbor models on the $d$-dimensional BCC (Body-Centered Cubic) lattice. (This is just like working on the triangular or hexagonal lattice instead of the square lattice in two dimensions.) Because of the nice properties of the BCC lattice, we can simplify the analysis and more easily prove the mean-field results for $d$ close to the corresponding upper-critical dimension, currently $d \geq 6$ for self-avoiding walk and $d \geq 10$ for percolation. This talk is based on joint work with Lung-Chi Chen, Satoshi Handa and Yoshinori Kamijima for self-avoiding walk, and on joint work with the above three colleagues and Markus Heydenreich for percolation.

11:30 - 12:20 Markus Heydenreich: The backbone scaling limit of high-dimensional incipient infinite cluster
By incipient infinite cluster we denote critical percolation conditioned on the cluster of the origin to be infinite.
This conditional measure, which is obtained through a suitable limiting scheme, is singular w.r.t. (ordinary) critical percolation. We define the backbone $B$ as the set of those vertices $x$ for which $\{x \text{ is connected to the origin}\}$ and $\{x \text{ is connected to infinity}\}$ occur disjointly. Our main result is that $B$, properly rescaled, converges to a Brownian motion path in sufficiently high dimension. One interpretation of this result is that spatial dependencies of the backbone vanish in the scaling limit. The result is achieved through a lace expansion of events of the form $\{x \text{ and } y \text{ are connected and there are } m \text{ pivotal bonds between } x \text{ and } y\}$. This extends the original Hara-Slade expansion for percolation and gives rise to some new diagrammatic estimates. The talk is based on joint work with R. van der Hofstad, T. Hulshof, and G. Miermont.

15:00 - 15:50 Hao Shen: A stochastic PDE with U(1) gauge symmetry
We consider the problem of constructing the Langevin dynamic of a lattice U(1) gauge theory in two spatial dimensions. The model consists of a vector field and a scalar field interacting on a 2D lattice, and we study the continuum limit of its natural dynamic for short time. This dynamic is not a priori parabolic, but we can turn it into a parabolic system with a time-dependent family of U(1) gauge transformations; we then apply Hairer's theory of regularity structures to the parabolic equations.

16:30 - 17:20 Roman Kotecky: Emergence of long cycles for random interchange process on hypercubes
Motivated by phase transitions in quantum spin models, we study random permutations of vertices (induced by products of uniform independent random transpositions on edges) in the case of high-dimensional hypercubes. We establish the existence of a transition accompanied by emergence of cycles of diverging lengths. (Joint work with Piotr Miłoś and Daniel Ueltschi.)
19:00 - 21:00 Dinner + Reception (Restaurant Hotel Hacienda Los Laureles)

Wednesday, June 29

08:30 - 12:30 Tour to Monte Alban
Go to Hotel Hacienda los Laureles at 8:30 a.m. to board the bus or buses. Price: $300.00 Mexican Pesos per person; payment is made directly with the company when staff of Turismo el Convento arrive at the hotel, and you can pay in cash or by credit card. This price includes:
- Passenger insurance
- Certified guide
- Licensed driver
- Bottled water
- Admission
- Round transportation from the hotel

13:00 - 14:30 Lunch: Note earlier time than usual (Restaurant Hotel Hacienda Los Laureles)

14:30 - 15:20 Federico Camia: Random loops in statistical mechanics and Euclidean field theory
Kurt Symanzik and others recognized since the 1960s that the study of the (lattice) fields associated with certain now-classical models of statistical mechanics and Euclidean field theory leads naturally to consider random loop models. These loop models are interesting in their own right, and have recently been the focus of renewed attention. In this talk, I will briefly introduce the Symanzik polymer representation of Euclidean field theory, and use it as a starting point to define new random fields with interesting properties, thus completing the loop. (Partly based on joint work with Marcin Lis, and with Alberto Gandolfi and Matthew Kleban.)

15:30 - 16:20 Antal Jarai: Inequalities for critical exponents in d-dimensional sandpiles
We prove rigorous upper and lower bounds for some critical exponents in Abelian sandpiles in dimensions $d \geq 2$: these concern the toppling probability, the avalanche radius and the avalanche cluster size. In $d > 4$, we establish the mean-field exponent for the radius apart from a logarithmic factor. (Joint work with Jack Hanson and Sandeep Bhupatiraju.)
17:00 - 17:50 Mark Holmes: Weak convergence of historical processes
Under the usual formulation of weak convergence of branching particle systems to super-Brownian motion, the state of the process at a fixed time is a measure on $R^d$. As a result, the weak convergence statement does not encode the genealogy present in e.g. the voter model and lattice trees. In joint work-in-progress with Ed Perkins we consider weak convergence of the so-called historical processes (where the state of the process at a fixed time is a measure on genealogical paths in $R^d$) for these models.

Thursday, June 30

09:00 - 09:50 Greg Lawler: Uniform Spanning Forests and Bi-Laplacian Gaussian Field
We construct the bi-Laplacian Gaussian field on $R^4$ as a scaling limit of a field in $Z^4$ constructed using a wired spanning forest. The proof requires improving the known results about four dimensional loop-erased random walk. There are similar (and somewhat easier) results for higher dimensions. This is joint work with Xin Sun and Wei Wu.

10:00 - 10:50 Charles Newman: Minimal Spanning Tree on a Slab
In joint work with Vincent Tassion and Wei Wu, we have studied the minimal spanning forests on the nearest neighbor slabs with vertex sets such as $Z^2 \times \{0,1,...,k\}^{d-2}$. For $Z^d$ itself, it is known that the forest is a single tree for $d = 1$ and $2$, but nothing is known for $d > 2$ except it is conjectured that the $d = 2$ behavior continues until some $d_c$ (probably 6 or 8) above which there are infinitely many trees in the forest. Our result is that, in slabs, there is only a single tree. The work is related to that of Duminil-Copin, Sidoravicius and Tassion who proved that there is no infinite cluster in critical Bernoulli percolation in such slabs. We also get new results for that critical percolation setting.
11:00 - 11:30 Marek Biskup: Structure of extreme local maxima of 2D Discrete Gaussian Free Field
I will attempt to explain the recent progress in our understanding of the shape of the large peaks in a typical sample of the two-dimensional Discrete Gaussian Free Field over a large but finite domain in the square lattice. As a consequence, I will give ideas from the construction of the supercritical Liouville Quantum Gravity measure, as well as a proof of the so-called freezing phenomenon associated with this process. Based on joint work with Oren Louidor.

15:00 - 15:50 Takashi Kumagai: Time changes of stochastic processes associated with resistance forms
In recent years, interest in time changes of stochastic processes according to irregular measures has arisen from various sources. Fundamental examples of such time-changed processes include the so-called Fontes-Isopi-Newman (FIN) diffusion, the introduction of which was motivated by the study of localization and aging properties of physical spin systems, and the two-dimensional Liouville Brownian motion, which is the diffusion naturally associated with planar Liouville quantum gravity. The FIN diffusion is known to be the scaling limit of the one-dimensional Bouchaud trap model, and the two-dimensional Liouville Brownian motion is conjectured to be the scaling limit of simple random walk on random planar maps. We will provide a general framework for studying such time-changed processes and their discrete approximations in the case when the underlying stochastic process is strongly recurrent, in the sense that it can be described by a resistance form, as introduced by J. Kigami. In particular, this includes the case of Brownian motion on tree-like spaces and low-dimensional self-similar fractals. If time permits, we also discuss heat kernel estimates for the relevant time-changed processes. This is joint work with D. Croydon (Warwick) and B.M. Hambly (Oxford).
16:30 - 17:20 Geronimo Uribe Bravo: Affine processes and multiparameter time changes
We present a time change construction of affine processes on $R_+^m \times R^n$. These processes were systematically studied in (Duffie, Filipović and Schachermayer, 2003), since they contain interesting classes of processes such as Lévy processes, continuous branching processes with immigration, and processes of the Ornstein-Uhlenbeck type. The construction is based on a (basically) continuous functional of a multidimensional Lévy process, which implies that limit theorems for Lévy processes (both almost sure and in distribution) can be inherited to affine processes. The construction can be interpreted as a multiparameter time change scheme or as a (random) ordinary differential equation driven by discontinuous functions. In particular, we propose approximation schemes for affine processes based on the Euler method for solving the associated discontinuous ODEs, which are shown to converge.

Friday, July 1

09:00 - 09:50 Christine Soteros: The embedding complexity of closed curves (polygons) and surfaces (closed 2-manifolds) in tubes in $Z^d$
There has been much interest in the embedding complexity of curves and surfaces in lattices, including the differences in exponential growth rates for embeddings subject to different topological constraints. This includes questions about knotting and linking for simple closed curves and graphs in $Z^3$, to model the entanglement complexity of flexible polymer molecules, and questions about embeddings of random surfaces in $Z^d$ and the effects of genus and the number of boundary components on their exponential growth rate. Despite much study, there are a number of conjectures about the complexity of these embeddings that remain unproved. Restricting the geometry by confining the curves or surfaces to a tube (or prism) in $Z^d$, however, makes the system quasi-one-dimensional and potentially more tractable.
Restricting to a tube is also of interest for exploring the effects of geometrical constraints, such as when modelling polymers under confinement. In this talk, I will review recent results about the topological complexity of polygons and closed 2-manifolds embedded in tubes in $Z^d$. For the case of polygons in a $2 \times 1 \times \infty$ sublattice of $Z^3$, knot theory results of Shimokawa and Ishihara lead to a proof that polygons with fixed knot type have the same exponential growth rate as unknotted polygons. For closed 2-manifolds in a tube in $Z^d$, if the embeddings are orientable with fixed genus and $d \neq 4$, we prove with Sumners and Whittington that the exponential growth rate is independent of the genus, and obtain a similar result for the non-orientable case when $d > 4$. More generally, transfer matrix arguments can be used to prove pattern theorems and we establish, for example, that: the typical genus of a closed 2-manifold embedding increases with the size of the manifold; orientable manifolds are exponentially rare when $d > 4$; and for $d = 4$ all except exponentially few 2-manifolds contain a local knotted (4,2)-ball pair.

10:00 - 10:50 Jesse Goodman: Long and short paths in first passage percolation on complete graphs
In a connected graph with random positive edge weights, pairs of vertices can be joined to obtain an a.s. unique path of minimal total weight. It is natural to ask about the typical total weight of such optimal paths, and about the number of edges they contain. To this end we consider the first passage percolation exploration process, which tracks the flow of fluid travelling across edges at unit speed and therefore discovers optimal paths in order of length. On the complete graph, adding exponential edge weights results in optimal paths with logarithmically many edges - the same "small world" path lengths that are typical of many random graphs.
However, by changing the edge weight distribution, we can obtain paths that are asymptotically shorter or longer than logarithmic. This talk will explain how tail properties of the edge weight distribution can be translated quite precisely into scaling properties of optimal paths.

11:30 - 12:20 Roland Bauerschmidt: The renormalisation group
The renormalisation group has been Gordon Slade's main focus of research for the past decade. I will explain some of the ideas and results.
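As a side illustration of the setting in Goodman's abstract (this sketch is ours, not workshop material, and all identifiers in it are made up): attach independent Exp(1) weights to the edges of the complete graph $K_n$ and find the minimal-weight path with Dijkstra's algorithm; its hop count is typically of order $\log n$. A seeded generator keeps the run reproducible.

```javascript
// First passage percolation on K_n with Exp(1) edge weights (illustrative sketch).

function lcg(seed) {
  // Tiny deterministic pseudo-random generator, so runs are reproducible.
  let state = seed >>> 0;
  return function () {
    state = (1664525 * state + 1013904223) >>> 0;
    return (state + 1) / 4294967297; // uniform in (0, 1), never exactly 0 or 1
  };
}

function expSample(rand) {
  // Exponential(1) sample via the inverse CDF.
  return -Math.log(1 - rand());
}

function dijkstra(n, weightOf, source) {
  // Plain O(n^2) Dijkstra; adequate for a dense complete graph.
  const dist = new Array(n).fill(Infinity);
  const prev = new Array(n).fill(-1);
  const done = new Array(n).fill(false);
  dist[source] = 0;
  for (let iter = 0; iter < n; iter++) {
    let u = -1;
    for (let v = 0; v < n; v++) {
      if (!done[v] && (u === -1 || dist[v] < dist[u])) u = v;
    }
    done[u] = true;
    for (let v = 0; v < n; v++) {
      if (v === u) continue;
      const d = dist[u] + weightOf(u, v);
      if (d < dist[v]) { dist[v] = d; prev[v] = u; }
    }
  }
  return { dist, prev };
}

function hopCount(prev, target) {
  // Number of edges on the optimal path, recovered from the predecessor array.
  let hops = 0;
  for (let v = target; prev[v] !== -1; v = prev[v]) hops++;
  return hops;
}

// Build K_n with i.i.d. Exp(1) edge weights (symmetric, cached on first use).
const n = 200;
const rand = lcg(165085);
const cache = new Map();
function weightOf(i, j) {
  const key = i < j ? i * n + j : j * n + i;
  if (!cache.has(key)) cache.set(key, expSample(rand));
  return cache.get(key);
}

const { dist, prev } = dijkstra(n, weightOf, 0);
console.log('optimal weight 0 -> 1:', dist[1].toFixed(3), '| hops:', hopCount(prev, 1));
```

Repeating this for growing $n$ (and averaging over seeds) exhibits the logarithmic hop counts mentioned in the abstract; heavier- or lighter-tailed weight distributions change that scaling, which is the subject of the talk.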
Mathematics > Metric Geometry

[Submitted on 21 Mar 2021 (v1), last revised 30 Jun 2021 (this version, v2)]

Title: On a conjectural symmetric version of Ehrhard's inequality
Authors: Galyna V. Livshyts

Abstract: We formulate a plausible conjecture for the optimal Ehrhard-type inequality for convex symmetric sets with respect to the Gaussian measure. Namely, letting $J_{k-1}(s)=\int^s_0 t^{k-1} e^{-\frac{t^2}{2}}dt$ and $c_{k-1}=J_{k-1}(+\infty)$, we conjecture that the function $F:[0,1]\rightarrow\mathbb{R}$, given by $$F(a)= \sum_{k=1}^n 1_{a\in E_k}\cdot(\beta_k J_{k-1}^{-1}(c_{k-1} a)+\alpha_k)$$ (with an appropriate choice of a decomposition $[0,1]=\cup_{i} E_i$ and coefficients $\alpha_i, \beta_i$) satisfies, for all symmetric convex sets $K$ and $L$ and any $\lambda\in[0,1]$, $$ F\left(\gamma(\lambda K+(1-\lambda)L)\right)\geq \lambda F\left(\gamma(K)\right)+(1-\lambda) F\left(\gamma(L)\right). $$ We explain that this conjecture is "the most optimistic possible", and is equivalent to the fact that for any symmetric convex set $K$, its Gaussian concavity power $p^s(K,\gamma)$ is greater than or equal to $p_s(RB^k_2\times \mathbb{R}^{n-k},\gamma)$, for some $k\in \{1,...,n\}$. We call the sets $RB^k_2\times \mathbb{R}^{n-k}$ round $k$-cylinders; they also appear as the conjectured Gaussian isoperimetric minimizers for symmetric sets, see Heilman [Heilman]. In this manuscript, we make progress towards this question, and prove a certain inequality for which the round $k$-cylinders are the only equality cases. As an auxiliary result on the way to the equality case characterization, we characterize the equality cases in the "convex set version" of the Brascamp-Lieb inequality, and moreover obtain a quantitative stability version in the case of the standard Gaussian measure; this may be of independent interest.
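A side note on the normalizing constants in the abstract (this remark is ours, not part of the abstract): substituting $u = t^2/2$ in the defining integral gives a standard closed form,

```latex
c_{k-1} \;=\; \int_0^{\infty} t^{k-1} e^{-t^2/2}\,dt
        \;=\; 2^{\frac{k}{2}-1}\,\Gamma\!\left(\frac{k}{2}\right),
\qquad\text{e.g. } c_0 = \sqrt{\pi/2},\quad c_1 = 1,
```

so $J_{k-1}^{-1}(c_{k-1}a)$ in the definition of $F$ evaluates the inverse at a fraction $a$ of the total mass of $t^{k-1}e^{-t^2/2}$ on $[0,\infty)$.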
Comments: 82 pages; part of the initial version of this paper became a separate paper, arxiv 3818518
Subjects: Metric Geometry (math.MG); Analysis of PDEs (math.AP); Differential Geometry (math.DG); Probability (math.PR)
Cite as: arXiv:2103.11433 [math.MG] (or arXiv:2103.11433v2 [math.MG] for this version)
From: Galyna Livshyts
[v1] Sun, 21 Mar 2021 16:33:00 UTC (378 KB)
[v2] Wed, 30 Jun 2021 21:11:32 UTC (371 KB)
# The importance of design patterns in front-end development

Design patterns are reusable solutions to common problems that occur in software design. They provide a blueprint for solving complex problems that can be adapted to various situations. In front-end development, design patterns play a crucial role in streamlining the development process, improving code maintainability, and ensuring consistency across projects.

For example, consider a front-end application that displays a list of products. Without design patterns, you might end up with spaghetti code of JavaScript functions, HTML templates, and CSS styles. With the help of design patterns, you can structure your code into a clear and organized architecture.

## Exercise

Instructions:

- List three common problems that arise in front-end development.
- Explain how design patterns can help solve these problems.

### Solution

1. Complexity: Design patterns can help organize and structure complex code into a clear and modular architecture.
2. Code duplication: Design patterns provide reusable solutions to common problems, reducing code duplication.
3. Maintainability: Design patterns promote code reusability and modularity, making it easier to maintain and update code.

# MVC: Model-View-Controller design pattern

The MVC pattern is a popular design pattern in front-end development that separates an application into three interconnected components: Model, View, and Controller. This separation of concerns allows for efficient development, testing, and maintenance of complex applications.

- Model: Represents the data and business logic of the application.
- View: Handles the presentation of the data to the user.
- Controller: Acts as an intermediary between the Model and View, managing the flow of data and user interactions.

Consider a simple front-end application that displays a list of products.
The Model would handle the data and business logic, the View would render the HTML and CSS, and the Controller would manage user interactions and update the View accordingly.

## Exercise

Instructions:

- Create a simple front-end application using the MVC pattern.
- Describe the Model, View, and Controller components in your application.

### Solution

```html
<!-- View -->
<div id="products">
  <h2>Product List</h2>
  <ul id="product-list"></ul>
</div>

<script>
  // Model
  const products = [
    { id: 1, name: 'Product 1', price: 100 },
    { id: 2, name: 'Product 2', price: 200 },
  ];

  // View
  const productList = document.getElementById('product-list');
  products.forEach(product => {
    const listItem = document.createElement('li');
    listItem.textContent = `${product.name} - $${product.price}`;
    productList.appendChild(listItem);
  });

  // Controller
  // In this simple example, there is no need for a separate controller.
  // The view and model are directly interacting.
  // However, in a more complex application, a controller would manage
  // user interactions and update the view accordingly.
</script>
```

# MVVM: Model-View-ViewModel design pattern

The MVVM pattern is an extension of the MVC pattern that introduces a ViewModel layer between the Model and View. The ViewModel acts as an intermediary between the Model and View, handling data transformations and user interaction logic.

- Model: Represents the data and business logic of the application.
- View: Handles the presentation of the data to the user.
- ViewModel: Transforms and manages the data from the Model for the View.

Continuing with the previous example, the MVVM pattern would involve creating a ViewModel that transforms the data from the Model into a format suitable for the View.

## Exercise

Instructions:

- Modify the previous front-end application to use the MVVM pattern.
- Describe the Model, View, and ViewModel components in your application.
### Solution

```html
<!-- View -->
<div id="products">
  <h2>Product List</h2>
  <ul id="product-list"></ul>
</div>

<script>
  // Model
  const products = [
    { id: 1, name: 'Product 1', price: 100 },
    { id: 2, name: 'Product 2', price: 200 },
  ];

  // ViewModel
  const viewModel = {
    getProductList: () => {
      return products.map(product => {
        return {
          id: product.id,
          displayName: `${product.name} - $${product.price}`,
        };
      });
    },
  };

  // View
  const productList = document.getElementById('product-list');
  viewModel.getProductList().forEach(product => {
    const listItem = document.createElement('li');
    listItem.textContent = product.displayName;
    productList.appendChild(listItem);
  });

  // Controller
  // In this simple example, there is no need for a separate controller.
  // The view and viewModel are directly interacting.
  // However, in a more complex application, a controller would manage
  // user interactions and update the view accordingly.
</script>
```

# The Observer pattern and its implementation in front-end development

The Observer pattern is a behavioral design pattern that defines a one-to-many dependency between objects. In front-end development, this pattern is commonly used to handle events and updates in a reactive programming style.

- Subject: Represents the object that is being observed.
- Observer: Represents the objects that are dependent on the Subject.

Consider a front-end application that displays a list of products and updates the view when a new product is added. The Subject would be the list of products, and the Observer would be the View.

## Exercise

Instructions:

- Implement the Observer pattern in the previous front-end application.
- Describe the Subject and Observer components in your application.
### Solution

```html
<!-- View -->
<div id="products">
  <h2>Product List</h2>
  <ul id="product-list"></ul>
</div>

<script>
  // Model
  const products = [
    { id: 1, name: 'Product 1', price: 100 },
    { id: 2, name: 'Product 2', price: 200 },
  ];

  // ViewModel
  const viewModel = {
    getProductList: () => {
      return products.map(product => {
        return {
          id: product.id,
          displayName: `${product.name} - $${product.price}`,
        };
      });
    },
  };

  // View
  const productList = document.getElementById('product-list');
  viewModel.getProductList().forEach(product => {
    const listItem = document.createElement('li');
    listItem.textContent = product.displayName;
    productList.appendChild(listItem);
  });

  // Subject
  // Note: shorthand methods (not arrow functions) are used here so that
  // `this` refers to the subject object itself; an arrow function would
  // capture the enclosing `this` and `this.observers` would be undefined.
  const subject = {
    observers: [],
    subscribe(observer) {
      this.observers.push(observer);
    },
    unsubscribe(observer) {
      this.observers = this.observers.filter(obs => obs !== observer);
    },
    notify() {
      this.observers.forEach(observer => observer.update());
    },
  };

  // Observer
  const observer = {
    update: () => {
      productList.innerHTML = '';
      viewModel.getProductList().forEach(product => {
        const listItem = document.createElement('li');
        listItem.textContent = product.displayName;
        productList.appendChild(listItem);
      });
    },
  };

  // Subscribe the view as an observer
  subject.subscribe(observer);

  // Simulate adding a new product
  products.push({ id: 3, name: 'Product 3', price: 300 });
  subject.notify();
</script>
```

# The Factory pattern and its uses in front-end development

The Factory pattern is a creational design pattern that provides an interface for creating objects in a superclass, but allows subclasses to alter the type of objects that will be created. In front-end development, this pattern is commonly used to create objects with specific configurations or behaviors.

- Factory: Represents the interface for creating objects.
- ConcreteFactory: Represents the specific implementation of the Factory.
Consider a front-end application that displays a list of products and allows the user to filter the list based on product type. The Factory pattern would be used to create different types of product filters.

## Exercise

Instructions:

- Implement the Factory pattern in the previous front-end application.
- Describe the Factory and ConcreteFactory components in your application.

### Solution

```html
<!-- View -->
<div id="products">
  <h2>Product List</h2>
  <ul id="product-list"></ul>
</div>

<script>
  // Model
  const products = [
    { id: 1, name: 'Product 1', price: 100, type: 'Electronics' },
    { id: 2, name: 'Product 2', price: 200, type: 'Clothing' },
    { id: 3, name: 'Product 3', price: 300, type: 'Electronics' },
  ];

  // Factory
  const filterFactory = {
    createFilter: (type) => {
      if (type === 'Electronics') {
        return new ElectronicsFilter();
      } else if (type === 'Clothing') {
        return new ClothingFilter();
      }
    },
  };

  // ConcreteFactory
  class ElectronicsFilter {
    filter(products) {
      return products.filter(product => product.type === 'Electronics');
    }
  }

  class ClothingFilter {
    filter(products) {
      return products.filter(product => product.type === 'Clothing');
    }
  }

  // ViewModel
  const viewModel = {
    getProductList: (filterType) => {
      const filter = filterFactory.createFilter(filterType);
      return filter.filter(products);
    },
  };

  // View
  const productList = document.getElementById('product-list');
  viewModel.getProductList('Electronics').forEach(product => {
    const listItem = document.createElement('li');
    listItem.textContent = `${product.name} - $${product.price}`;
    productList.appendChild(listItem);
  });

  // Controller
  // In this simple example, there is no need for a separate controller.
  // The view and viewModel are directly interacting.
  // However, in a more complex application, a controller would manage
  // user interactions and update the view accordingly.
</script>
```

# The Singleton pattern and its benefits in front-end development

The Singleton pattern is a creational design pattern that ensures that a class has only one instance and provides a global point of access to that instance. In front-end development, this pattern is commonly used to manage shared resources or state.

- Singleton: Represents the class that ensures only one instance is created.

Consider a front-end application that manages user authentication. The Singleton pattern would be used to create a single instance of an AuthenticationService, ensuring that the authentication state is shared across all components of the application.

## Exercise

Instructions:

- Implement the Singleton pattern in the previous front-end application.
- Describe the Singleton component in your application.

### Solution

```html
<!-- View -->
<div id="products">
  <h2>Product List</h2>
  <ul id="product-list"></ul>
</div>

<script>
  // Model
  const products = [
    { id: 1, name: 'Product 1', price: 100 },
    { id: 2, name: 'Product 2', price: 200 },
  ];

  // Singleton
  class AuthenticationService {
    constructor() {
      if (!AuthenticationService.instance) {
        AuthenticationService.instance = this;
      }
      return AuthenticationService.instance;
    }

    authenticateUser() {
      // Authentication logic
    }
  }

  // ViewModel
  const viewModel = {
    getProductList: () => {
      return products.map(product => {
        return {
          id: product.id,
          displayName: `${product.name} - $${product.price}`,
        };
      });
    },
  };

  // View
  const productList = document.getElementById('product-list');
  viewModel.getProductList().forEach(product => {
    const listItem = document.createElement('li');
    listItem.textContent = product.displayName;
    productList.appendChild(listItem);
  });

  // Controller
  // In this simple example, there is no need for a separate controller.
  // The view and viewModel are directly interacting.
  // However, in a more complex application, a controller would manage
  // user interactions and update the view accordingly.
  // Use the Singleton pattern to create an instance of the AuthenticationService
  const authService = new AuthenticationService();
</script>
```

# Applying design patterns in real-world front-end development projects

Design patterns can be applied to real-world front-end development projects to improve code organization, maintainability, and scalability. Some common use cases include:

- Structuring complex applications using the MVC or MVVM pattern.
- Implementing event-driven programming using the Observer pattern.
- Creating reusable components and widgets using the Factory pattern.
- Managing shared resources or state using the Singleton pattern.

In a real-world front-end development project, the MVC pattern could be used to structure the application into separate components for data, presentation, and user interaction. The Observer pattern could be used to handle events and updates in a reactive programming style. The Factory pattern could be used to create different types of components or widgets with specific configurations or behaviors. The Singleton pattern could be used to manage shared resources or state, ensuring that the authentication state is shared across all components of the application.

## Exercise

Instructions:

- Apply the design patterns discussed in this textbook to a real-world front-end development project.
- Describe the specific design patterns used and their impact on the project.

### Solution

```html
<!-- Project structure -->
- Model
  - Product.js
  - User.js
- View
  - ProductList.js
  - UserProfile.js
- Controller
  - ProductController.js
  - UserController.js

<!-- Implementation -->
- Use the MVC pattern to structure the project into Model, View, and Controller components.
- Use the Observer pattern to handle events and updates in a reactive programming style.
- Use the Factory pattern to create different types of components or widgets with specific configurations or behaviors.
- Use the Singleton pattern to manage shared resources or state, ensuring that the authentication state is shared across all components of the application.
```

# Case studies of successful front-end projects that use design patterns

Several successful front-end projects have used design patterns to streamline their development process, improve code maintainability, and ensure consistency across projects. Some examples include:

- React: A popular JavaScript library for building user interfaces, which uses the MVC pattern to separate data and business logic from presentation.
- Angular: A powerful JavaScript framework for building front-end applications, which uses the MVC pattern and the Dependency Injection pattern to manage components and their dependencies.
- Vue.js: A lightweight and flexible JavaScript framework for building user interfaces, which uses the MVVM pattern to separate data and business logic from presentation.

In the case of React, the MVC pattern is used to structure the application into components, which handle their own data and presentation logic. This separation of concerns allows for efficient development, testing, and maintenance of complex applications.

## Exercise

Instructions:

- Research and analyze three successful front-end projects that use design patterns.
- Describe the specific design patterns used in each project and their impact on the project's success.

### Solution

1. React:
   - Design Pattern: MVC
   - Impact: Streamlined development process, improved code maintainability, and consistency across projects.
2. Angular:
   - Design Pattern: MVC, Dependency Injection
   - Impact: Enhanced component-based architecture, efficient dependency management, and improved scalability.
3. Vue.js:
   - Design Pattern: MVVM
   - Impact: Lightweight and flexible framework, efficient data and presentation management, and improved performance.
# Advantages and disadvantages of using design patterns in front-end development

Using design patterns in front-end development has both advantages and disadvantages. Some advantages include:

- Improved code organization and separation of concerns.
- Enhanced code reusability and modularity.
- Streamlined development process and improved maintainability.

Some disadvantages include:

- Increased complexity and learning curve for new developers.
- Overuse of patterns can lead to unnecessary abstractions and complexity.
- Potential for over-engineering or over-optimization.

In a well-designed front-end application, the use of design patterns can lead to a clean, organized codebase that is easy to understand and maintain. However, overuse of patterns or unnecessary abstractions can lead to unnecessary complexity and bloat.

## Exercise

Instructions:

- Describe two scenarios where using design patterns in front-end development can lead to over-engineering or over-optimization.

### Solution

1. Overuse of the Singleton pattern: In a small front-end application, creating a single instance of a service or manager may not be necessary. This can lead to unnecessary complexity and potential for future issues.
2. Overuse of the Factory pattern: In a front-end application with a limited number of components or widgets, creating a separate Factory for each component may be overkill. This can lead to unnecessary complexity and potential for future issues.

# Best practices for implementing design patterns in front-end development

When implementing design patterns in front-end development, it is important to follow best practices to ensure efficient and maintainable code. Some best practices include:

- Understand the purpose and benefits of the design pattern being used.
- Apply the pattern to the appropriate components or modules of the application.
- Keep the pattern implementation simple and focused on its specific purpose.
- Avoid overuse or over-optimization of patterns.
- Document and communicate the use of design patterns within the development team.

In a front-end development project, it is important to carefully consider the purpose and benefits of each design pattern being used. Applying the pattern to the appropriate components or modules ensures that the pattern is used effectively and efficiently. Keeping the pattern implementation simple and focused on its specific purpose helps to avoid unnecessary complexity. Avoiding overuse or over-optimization of patterns ensures that the code remains maintainable and scalable. Finally, documenting and communicating the use of design patterns within the development team helps to ensure that everyone is on the same page and can effectively collaborate on the project.

## Exercise

Instructions:

- Describe two scenarios where following best practices when implementing design patterns in front-end development can lead to improved code quality and maintainability.

### Solution

1. Applying the MVC pattern to a complex front-end application: By carefully separating data and business logic from presentation, the MVC pattern can lead to a clean and organized codebase that is easy to understand and maintain. This can improve code quality and maintainability.
2. Implementing the Observer pattern for handling events and updates: By using the Observer pattern, front-end developers can handle events and updates in a reactive programming style. This can lead to more efficient and maintainable code, as well as improved scalability.

# Conclusion: The future of front-end development and design patterns

The future of front-end development and design patterns is promising. As front-end technologies continue to evolve and complex applications become the norm, the use of design patterns will remain crucial for streamlining development processes, improving code maintainability, and ensuring consistency across projects.
By staying up-to-date with the latest design patterns and best practices, front-end developers can continue to leverage the power of design patterns to build scalable, efficient, and maintainable applications.

## Exercise

Instructions:

- Predict the future of front-end development and design patterns.
- Describe how front-end development and design patterns will evolve in the next 5-10 years.

### Solution

In the next 5-10 years, front-end development and design patterns are expected to continue evolving and adapting to new technologies and paradigms. Some potential trends include:

- The adoption of more modular and component-based architectures: As front-end applications become more complex, the use of design patterns such as MVC, MVVM, and Factory will continue to be essential for organizing and managing code.
- The rise of reactive programming and event-driven architectures: The Observer pattern and its derivatives will likely continue to be used in front-end development to handle events and updates in a reactive programming style.
- The integration of machine learning and AI in front-end development: As machine learning and AI technologies become more accessible and integrated into front-end development, design patterns for managing and optimizing these technologies will likely emerge.
- The continued importance of code quality and maintainability: As front-end applications become more complex and critical to business operations, the use of design patterns and best practices will remain crucial for ensuring code quality and maintainability.
Coseismic changes in subsurface structure associated with the 2018 Hokkaido Eastern Iburi Earthquake detected using autocorrelation analysis of ambient seismic noise

Hiroki Ikeda and Ryota Takagi

Earth, Planets and Space 2019, 71:72

Abstract: Autocorrelation analysis using ambient noise is a useful method to detect temporal changes in wave velocity and scattering property. In this study, we investigated the temporal changes in seismic wave velocity and scattering property in the focal region of the 2018 Hokkaido Eastern Iburi Earthquake. The autocorrelation function (ACF) was calculated by processing with bandpass filters to enhance the 1–2 Hz frequency range, with aftershock removal, and applying the one-bit correlation technique. The stretching method was used to detect the seismic wave velocity change. After the mainshock, seismic velocity reductions were observed at many stations. At N.AMAH and ATSUMA, which are located close to the mainshock, we detected 2–3% decreases in seismic wave velocity. We compared parameters indicating strong ground motion and showed the possibility of a correlation between peak dynamic strain and seismic velocity reduction. We also investigated the relationship between waveform correlation and lag time, using ACFs from before and after the mainshock, and detected distortion of the ACF waveform. The source of the waveform decorrelation was estimated to be located near the maximum coseismic slip, at around 30 km depth. Thus, distortion of the ACF waveform may reflect the formation of cracks, due to faulting at approximately 30 km depth.

Keywords: Seismic velocity changes; Scatterer distribution change; Autocorrelation function; Hokkaido Eastern Iburi Earthquake; Seismic interferometry

Earthquakes, and their genesis processes, change the internal state of the Earth, via stress state changes, pore fluid movement, fractures around the fault, and shallow ground damage.
The Earth's interior state affects seismic wave velocity and scatterer distribution, or scattering properties. Therefore, we can better understand the temporal evolution of the Earth's interior state associated with earthquakes by monitoring changes in the seismic wave propagation process over time.

Seismic interferometry is a useful method with which to monitor temporal change in the seismic wave propagation process (e.g., Sens-Schönfelder and Wegler 2006). Seismic interferometry is a method to obtain Green's function between two seismic stations by computing cross-correlation functions of either ambient noise or coda waves. Repeating earthquakes and artificial explosions have also been used to detect seismic velocity changes associated with large earthquakes or volcanic activity (e.g., Nishimura et al. 2000; Poupinet et al. 1984). However, since repeating earthquakes do not occur frequently, and artificial explosions are expensive, the temporal and spatial resolution of velocity changes has been low in these studies. Methods using auto- and cross-correlation functions of the continuous ambient noise record (ACFs and CCFs) can estimate temporal changes in the velocity structure with better spatial and temporal resolution. Several studies have reported temporal changes in the velocity structure associated with large earthquakes, by applying seismic interferometry (e.g., Brenguier et al. 2008; Wegler et al. 2009).

Regarding the seismic velocity changes which accompanied large earthquakes, damage to the shallow subsurface resulting from strong ground motion has been found to contribute largely to the velocity reduction of the near-surface layer (e.g., Hobiger et al. 2016; Nakata and Snieder 2011; Sawazaki and Snieder 2013; Takagi et al. 2012).
In addition to the damage in shallow subsurface layers, deformation and stress relaxation in the deep crust associated with large earthquakes have also been reported to have caused seismic velocity drops, and their recovery, after earthquakes (Brenguier et al. 2008; Chen et al. 2010). Moreover, several studies have reported that seismic velocity changed during earthquake swarm activities and also during slow slip events that did not generate strong ground motion (Maeda et al. 2010; Ueno et al. 2012; Rivet et al. 2011).

Changes in subsurface scatterer distribution, and/or scattering property, can also be monitored with seismic interferometry. While seismic velocity changes cause phase shifts in the ACF and CCF waveforms, scattering property changes cause waveform shape changes, which can be measured through the reduced cross-correlation coefficient between waveforms measured before and after scattering property changes. Obermann et al. (2014) detected decreased correlation values between CCFs from before and after the 2008 Sichuan earthquake and located the area of the change in the scattering property near the fault zone. Chen et al. (2015) have reported changes to the repeating earthquake waveforms after the 1999 Chichi earthquake and attributed the changes to deep fault zone damage.

The M_JMA 6.7 Hokkaido Eastern Iburi Earthquake occurred on September 6, 2018; it generated strong ground motion, with the maximum seismic intensity reaching 7, the highest value on the Japan Meteorological Agency (JMA) scale (https://www.jma.go.jp/jma/en/Activities/inttable.html). The JMA estimated the depth of the hypocenter at 37 km, which is deeper than normal for inland earthquakes in Japan. The average depth of the Moho discontinuity in the central part of the northeastern Japan arc is ~ 35 km (e.g., Katsumata 2010). Kita et al.
(2010, 2012) have shown that a low-velocity anomaly zone corresponding to the seismic velocity of crustal rock (Vp < 7.2 km/s and Vs < 4.2 km/s) exists at depths of 35–80 km under the Hidaka district of Hokkaido. Because of this complicated structure, this earthquake may have occurred at a depth of 37 km. The initial focal mechanism solution determined by the polarization of P-waves showed a strike-slip type fault, with a pressure axis extending from the northeast in a westerly direction to the southwest, whereas the centroid moment tensor solution showed a reverse fault type (National Research Institute for Earth Science and Disaster Resilience 2018), a discrepancy which suggested a complex fault rupture process.

In the study reported here, we detected temporal change in subsurface structures during the 2018 Hokkaido Eastern Iburi Earthquake, based on ACF analysis, and using ambient noise records. We focused on temporal changes, not only in seismic velocity, but also in scattering property.

We computed ACFs for ambient noise and estimated temporal variations in seismic velocity according to the methods described in Yukutake et al. (2016) and Wegler et al. (2009). We used continuous, vertical-component waveform data from 11 Hi-net stations managed by the National Research Institute for Earth Science and Disaster Resilience (NIED; National Research Institute for Earth Science and Disaster Resilience 2019b), and one JMA station (Fig. 1). The time period for the data analysis was from March 1 to October 31, 2018.

Fig. 1 Locations of seismic stations used in this study, and the seismicity from September 1 to October 31, 2018. The inset map shows the location of the study area.

Calculating ACFs

We computed daily ACFs to detect temporal subsurface variations.
Firstly, we divided 1-day records into 1-min time windows, with overlaps of 30 s; then, we applied bandpass filters of 0.1–0.5 Hz, 1–3 Hz, and 2–8 Hz to the time-windowed data, after removing the linear trend and offset. We then carried out down-sampling from 100 to 20 Hz and one-bit normalization (after, e.g., Campillo and Paul 2003). We then computed ACFs for all time windows, and averaged them to obtain daily ACFs.

The computed daily ACFs in the 0.1–0.5 Hz and 2–8 Hz ranges were unstable, and in the 0.1–0.5 Hz range, temporal phase fluctuations of the ACFs were too large to detect subtle change due to subsurface structural variation, which may have been caused by temporal change in the source distribution of microseisms. In the 1–3 Hz and 2–8 Hz ranges, some stations showed monotonic behavior, with peak frequencies above 3 Hz. Therefore, we again applied a bandpass filter, between 1 and 2 Hz, to the ACFs in the 1–3 Hz range, and to obtain more stable ACFs, we stacked the ACFs for 1 week before the corresponding dates.

Temporal changes in the source distribution of ambient noise have been reported as causing apparent changes to ACFs and in seismic velocity (Wegler et al. 2009). For detecting temporal changes in subsurface structure after large earthquakes, contamination of aftershocks in observed records may be the main factor changing the source distributions. Thus, we discarded the 1-min time windows containing earthquake signals, based on the standard deviation of the observed amplitude. We set threshold values at five times the median of the standard deviation during the whole observation period, and when standard deviations in a time window exceeded the thresholds, those time windows were not used to compute ACFs.

Stretching method

The stretching method was used to estimate velocity change (e.g., Sens-Schönfelder and Wegler 2006). The stretching method assumes a spatially homogeneous velocity change.
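The daily-ACF workflow described above (1-min windows with 30-s overlap, one-bit normalization, the five-times-median aftershock screen, and averaging) can be sketched as follows. This is a minimal illustration under stated assumptions: the bandpass filtering, detrending, and downsampling steps are omitted, and the window and threshold parameters simply follow the values quoted in the text.

```python
import numpy as np

def one_bit(x):
    """One-bit normalization: keep only the sign of the signal."""
    return np.sign(x)

def autocorr(x, max_lag):
    """FFT-based autocorrelation, normalized by the zero-lag value."""
    n = len(x)
    spec = np.fft.rfft(x, 2 * n)                      # zero-padded to avoid wrap-around
    acf = np.fft.irfft(spec * np.conj(spec), 2 * n)[:max_lag + 1]
    return acf / acf[0]

def daily_acf(trace, fs, win_s=60.0, step_s=30.0, max_lag_s=15.0, quake_factor=5.0):
    """Average ACFs over 1-min windows with 30-s overlap, skipping windows
    whose standard deviation exceeds quake_factor times the median window
    standard deviation (the aftershock screen described in the text)."""
    win, step = int(win_s * fs), int(step_s * fs)
    max_lag = int(max_lag_s * fs)
    segs = [trace[i:i + win] for i in range(0, len(trace) - win + 1, step)]
    stds = np.array([s.std() for s in segs])
    keep = stds <= quake_factor * np.median(stds)     # drop earthquake-contaminated windows
    acfs = [autocorr(one_bit(s - s.mean()), max_lag)
            for s, ok in zip(segs, keep) if ok]
    return np.mean(acfs, axis=0)
```

In practice one would apply this per station per day, then stack the daily ACFs over the preceding week, as the paper does.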
With this assumption, the time delay after a seismic velocity change can be predicted as shown in (1), where dv/v is the velocity change ratio, t is the lag time of the ACF, and dt is the time shift in the ACF at t.

$$dv / v = - dt / t \quad (1)$$

The ACF waveform is stretched or compressed by the predicted time delay and is then cross-correlated with a reference waveform. We can obtain the optimum value of dv/v by maximizing the cross-correlation coefficient between the stretched and reference ACFs. The reference ACF in this study was calculated from the mean of all ACFs before the mainshock. We performed grid searches for dv/v within the range from − 5 to 5%, with steps of 0.1%. Three lag time windows of 4–15 s, 4–9.5 s, and 9.5–15 s were examined.

The dv/v measurement standard deviations were estimated using the following theoretical formula (Eq. (2) from Weaver et al. 2011). In (2), T is the inverse of the frequency band, \(t_{1}\) and \(t_{2}\) are the minimum and maximum values of the time window, respectively, \(\omega_{c}\) is the median value of the frequency, and CC is the correlation coefficient between the reference ACF and other ACFs.

$$\sigma_{d} = \frac{\sqrt{1 - CC^{2}}}{2CC}\sqrt{\frac{6\sqrt{\frac{\pi}{2}}\, T}{\omega_{c}^{2}\left(t_{2}^{3} - t_{1}^{3}\right)}} \quad (2)$$

Detecting waveform distortion

If the earthquake perturbs not only seismic velocity but also scattering property, by crack nucleation and/or fault zone damage, we can observe distortion of the ACF waveforms, in addition to the phase delay caused by velocity change. In order to quantify the waveform distortion, we compared averaged ACFs before and after the mainshock. We stretched or compressed the post-seismic ACF according to Eq. (1) and computed maximum correlation coefficients using moving time windows of 3 s, with 0.5-s steps. Waveform stretching corrects the phase delay due to seismic velocity change and thus enables detection of waveform distortion without a phase shift.
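The stretching grid search and the theoretical error of Eq. (2) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the sign convention follows Eq. (1) (dt/t = -dv/v, so the current ACF is sampled at lag times t(1 - dv/v)), and the grid range and step follow the values quoted in the text.

```python
import numpy as np

def stretch_dvv(ref, cur, fs, t1, t2, grid=None):
    """Grid search for dv/v: resample the current ACF at stretched lag times
    and pick the dv/v maximizing the correlation with the reference ACF
    inside the lag window [t1, t2] seconds."""
    if grid is None:
        grid = np.arange(-0.05, 0.0501, 0.001)        # -5% to +5% in 0.1% steps
    t = np.arange(len(ref)) / fs
    mask = (t >= t1) & (t <= t2)
    best_dvv, best_cc = 0.0, -np.inf
    for dvv in grid:
        stretched = np.interp(t * (1.0 - dvv), t, cur)  # dv/v = -dt/t (Eq. 1)
        cc = np.corrcoef(stretched[mask], ref[mask])[0, 1]
        if cc > best_cc:
            best_dvv, best_cc = dvv, cc
    return best_dvv, best_cc

def dvv_sigma(cc, band_hz, wc, t1, t2):
    """Theoretical standard deviation of the dv/v estimate (Eq. 2);
    T = 1/band_hz, wc is the (angular) center frequency of the band."""
    T = 1.0 / band_hz
    return (np.sqrt(1.0 - cc**2) / (2.0 * cc)) * np.sqrt(
        6.0 * np.sqrt(np.pi / 2.0) * T / (wc**2 * (t2**3 - t1**3)))
```

As Eq. (2) implies, the error grows as the correlation coefficient drops and shrinks for longer or later lag windows.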
The moving time window allowed us to examine the relationship between lag time and waveform distortion.

We detected changes to the ACFs after the mainshock. Figure 2 shows the calculated ACFs at seismic stations N.AMAH and ATSUMA: at N.AMAH, we found ACF phase delays at lag times of 4–6 s after the mainshock, while at ATSUMA, we could also see ACF phase delays at lag times of 4–9 s after the mainshock, and further confirmed the tendency of the phase delay to increase with lag time. In addition to the phase delays, we found changes in the characteristics of the ACF waveform. The shape of the ACF waveform changed before and after the earthquake around lag times of 6, 9, and 12 s at N.AMAH, and around the lag time of 10 s at ATSUMA. Around these lag times, simple homogeneous phase shifts based on Eq. (1) cannot explain the change in the ACF waveform.

Fig. 2 Calculated ACFs at a N.AMAH, and b ATSUMA seismic stations. The top panels show the averaged ACFs before (black) and after (red) the mainshock. The bottom panels show the 7-day averages of the ACFs. The red line in the bottom panels indicates the day the mainshock occurred.

The stretching method revealed the temporal changes in dv/v at each station. Figure 3 shows the result of the stretching method with a time window of 4–15 s. The dv/v for all stations, except for N.KYMH, fluctuated within ± 0.5% of the value recorded before the earthquake. At N.AMAH, which was close to the mainshock epicenter, the seismic velocity decreased by about 3% after the mainshock. ATSUMA station also showed seismic velocity reduction, in this case by about 2%. Post-seismic dv/v recovery was not clear within the analysis period. Although a dv/v reduction exceeding 1% was not observed at other stations, the average values of the dv/v after the earthquake were smaller than the averages before the earthquake at N.MBWH, N.OIWH, and N.CTSH. In order to obtain the coseismic changes in dv/v, we averaged the dv/v after the mainshock in Fig.
3 using the errors of the individual measurements as a weighting factor. The average values and standard deviations of the average are listed in Table 1, and most of the stations show coseismic velocity decreases. Estimated temporal changes in dv/v over time, at each station. The color shows cross-correlation coefficients between the reference and individual ACFs. Only dv/v values with cross-correlation coefficients ≥ 0.6 are colored. The red line indicates the day the mainshock occurred Average values in dv/v and their standard deviations estimated from the three time windows, and computed indexes of strong ground motion, at each station 4.0–15.0 s 4.0–9.5 s Strong motion indexes dv/v PGV [m/s] [gal] [10−3] N.AMAH* − 2.61 ATSUMA N.MBWH N.CTSH N.YUBH N.OIWH N.HOBH N.KYMH N.MBEH N.SZNH N.BREH N.BRWH* *Indicates we used unscreened data from KiK-net Note that the 7-day averages of the ACFs before the corresponding dates were used to estimate the velocity changes. The 7-day moving average causes apparent time delay of the velocity change. In addition, data were missed just after the main shock for 2 days at N.AMAH and a half day at ATSUMA. The aftershock removal in the ACF also may reduce the available data soon after the mainshock. Although the velocity reduction occurred a few days after the main shock at N.AMAH and gradually occurred at ATSUMA, the delayed response can be explained by the aforementioned factors. The results of the stretching method depended on the time window used. Figure 4a, b shows the results of the stretching method, with the time windows 4–9.5 s and 9.5–15 s, at N.AMAH and ATSUMA, respectively. The magnitude of the dv/v differed when we used different time windows, which suggested that, at these two stations, the ACF changes could not be simply explained by a homogeneous velocity change. In addition, the correlation coefficient after the waveform stretching decreased after the earthquake (Figs. 3 and 4). 
This decoherence also suggested that not only the seismic velocity but also the scattering properties affected the temporal ACF variation.

Fig. 4 Estimated temporal changes in dv/v over time, using the respective time windows, at (a) N.AMAH and (b) ATSUMA seismic stations.

Figure 5 shows the lag time dependency of the ACF waveform change, based on the correlation coefficient between the reference and post-seismic ACFs. The degradation characteristics of the correlation coefficient varied between seismic stations: it decreased significantly near the lag time of 10 s at ATSUMA, and around 6.5 s, 10 s, and 12.5 s at N.AMAH. The decrease in correlation values could also be confirmed from the ACFs shown in Fig. 2. In general, later lag times may tend to have low correlation coefficients, due to a decreasing signal-to-noise ratio with increasing lag time. At ATSUMA station, however, the correlation coefficient became higher than 0.9 after the drop in the correlation coefficient around 10 s. Thus, the decreased correlation values reflected changes in the subsurface structure.

Fig. 5 The relationship between cross-correlation coefficients and lag times for each station.

The reduction of dv/v after the mainshock

Immediately after the mainshock on September 6, the dv/v was observed to decrease by 2–3% at the two seismic stations close to the mainshock. Previous studies reported similar coseismic reductions of dv/v, by a few percent, at the focal regions of large earthquakes (e.g., Hobiger et al. 2016). The frequency dependence of dv/v, and comparisons with vertical seismic array analyses, have indicated that such velocity reductions tend to concentrate in the shallow subsurface, down to a few hundred meters deep (Hobiger et al. 2014; Takagi et al. 2012). Larger velocity decreases in the 4.0–9.5 s time window than in the 9.5–15.0 s window also imply that the velocity decrease is mainly located in the shallow subsurface (Table 1).
Thus, the dv/v decreases in the ACFs estimated by the present study may be due to shallow ground damage caused by strong ground motions. We compared the observed seismic velocity changes with indexes of strong ground motion. Table 1 shows \(V_{s30}\) at each station, along with the peak ground acceleration (PGA), peak ground velocity (PGV), and peak dynamic strain (PDS) caused by the strong motion of the mainshock. \(V_{s30}\) is the average S-wave velocity from the ground surface to 30 m depth and is defined by Eq. (3), where \(d_{i}\) is the thickness of a layer and \(v_{{s_{i} }}\) is its S-wave velocity.

$$V_{s30} = \frac{30\,\left[\text{m}\right]}{\sum\nolimits_{i} (d_{i} / v_{s_{i}})}$$

We estimated PGA and PGV using strong motion records on the ground surface from KiK-net stations collocated with Hi-net stations (National Research Institute for Earth Science and Disaster Resilience 2019a). PDS is the maximum dynamic strain change due to strong ground motion, estimated by dividing PGV by \(V_{s30}\) (Takagi and Okada 2012; Sawazaki and Snieder 2013). The station with the maximum PGA and PGV, N.OIWH, showed a dv/v decrease of only 0.17%. N.AMAH, on the other hand, had smaller PGA and PGV than N.OIWH, but the PDS was largest at N.AMAH. The maximum coseismic decrease in dv/v among these stations, 2.61%, was observed at N.AMAH, indicating that dv/v may correlate better with PDS than with PGA or PGV. Moreover, the velocity drop of similar magnitude to those in previous works, and its correlation with PDS, implied that damage in the shallow layers due to strong ground motion was the main cause of the velocity drop. Hobiger et al. (2016) measured seismic velocity changes accompanying multiple large earthquakes in Japan, and compared these with several indexes of strong motion.
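Equation (3) and the PDS definition translate directly into code. The sketch below is illustrative only; the two-layer velocity profile in the usage note is hypothetical, not a real station profile from the study.

```python
def vs30(thickness_m, vs_mps):
    """Eq. (3): time-averaged S-wave velocity over the top 30 m,
    30 / sum(d_i / v_s_i), for layers that together span 30 m."""
    if abs(sum(thickness_m) - 30.0) > 1e-9:
        raise ValueError("layer thicknesses must sum to 30 m")
    return 30.0 / sum(d / v for d, v in zip(thickness_m, vs_mps))

def peak_dynamic_strain(pgv_mps, vs30_mps):
    """PDS = PGV / Vs30 (Takagi and Okada 2012; Sawazaki and Snieder 2013)."""
    return pgv_mps / vs30_mps
```

For a hypothetical profile of 10 m at 100 m/s over 20 m at 400 m/s, Vs30 = 30/(0.1 + 0.05) = 200 m/s, and a PGV of 0.4 m/s gives a PDS of 2 × 10⁻³.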
The data shown in Table 1 were consistent with the relationship between seismic velocity changes and strong motion indexes estimated by Hobiger et al. (2016). However, since there is no simple linear relationship between PDS and dv/v, it is difficult to explain the velocity changes using PDS alone. For example, at the N.MBWH, N.CTSH, N.YUBH, N.OIWH, and N.HOBH stations, although the PDS values differed by almost one order of magnitude, the dv/v values were of the same order, which suggests that susceptibility to velocity change varies with the ground and geological structure (Brenguier et al. 2014).

Degradation of waveform correlation

We examined the relationship between lag times and correlation coefficients and found decreased waveform correlations before and after the mainshock. Such decorrelation of the ACF waveforms could be attributed to a changed subsurface scattering property. Another possible cause of the ACF waveform decorrelation is spatially inhomogeneous or localized velocity changes. Because we assumed spatially homogeneous velocity changes, and thus homogeneous phase shifts of the ACFs within the 3-s time windows, inhomogeneous phase shifts caused by inhomogeneous or localized velocity changes may result in decorrelation of the ACF waveforms. For example, the ACFs of N.AMAH station show phase delays at 4–6 s and phase advances at 7–9 s, the latter of which may be interpreted as a localized velocity increase. Although such localized velocity changes may partly explain the observed decorrelation, the shapes and amplitudes of the ACFs around 10 s at ATSUMA, and around 9 s and later lag times at N.AMAH, do not appear to be explained by inhomogeneous phase shifts. Thus, hereafter, we attribute the decorrelation to scattering property changes and discuss the location of the changes.

In order to locate the area showing the scattering property change, we needed to distinguish the dominant ACF wave type. Obermann et al.
(2013) showed that the body wave component dominates the surface wave in the latter part of the CCF. Generally, it has been suggested that both surface waves and body waves are included in ambient noise of 1 Hz or more (Bonnefoy-Claudet et al. 2006). Takagi (2014) showed that the power spectral ratio of Rayleigh waves and P-waves approached 1 in the ACF of 1–2 Hz, vertical-component ambient noise. Roux et al. (2005) also showed that Rayleigh waves and P-waves were extracted from the CCF of ambient noise vertical components and that P-waves were dominant at 0.7 Hz and above. Since the possibility of a surface wave contribution cannot be completely excluded by this research alone, it will be necessary to clarify the ACF wave field, for example by using dense array observations near the stations where the change was detected. However, according to the previous studies noted above, it may be reasonable to assume that the degradation of the waveform correlation was due to P-waves.

In order to locate the area that corresponded with the waveform correlation degradation, we carried out the following analysis. First, we divided the target area into cubic blocks with 0.5-km sides and computed the two-way travel times from the seismic stations to the cubic blocks. A one-dimensional (1D) JMA velocity structure was used for ray tracing (Ueno et al. 2002); note that this is the same 1D velocity structure used by the JMA for hypocenter location. Then, assuming that the ACFs comprised single backscattered P-waves, and thus regarding the lag times of the ACFs as two-way travel times, we assigned the decorrelation values (1 − CC) of the corresponding lag times to the cubic blocks. We obtained the spatial distribution of the decorrelation values by summing the contributions from all stations used in this study. The larger the value of decorrelation, the more significant the influence on the waveform change.
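The back-projection step just described can be sketched as below. The study traced rays through the 1-D JMA velocity model; the homogeneous half-space with straight rays used here is a simplifying assumption, and all coordinates are illustrative.

```python
import numpy as np

def backproject_decorrelation(stations, blocks, lag_cc, vp=6.0):
    """Sum decorrelation (1 - CC) onto blocks whose two-way P travel time
    matches the lag time at which the decorrelation was measured.

    stations: list of (x, y, z) coordinates in km (z positive downward)
    blocks:   array of (x, y, z) block centers in km (a 0.5-km grid in the study)
    lag_cc:   per-station (lags_s, cc) arrays from the moving-window analysis
    vp:       P velocity in km/s; a homogeneous 6 km/s half-space with straight
              rays is assumed here, in place of the 1-D JMA model ray tracing.
    """
    blocks = np.asarray(blocks, float)
    score = np.zeros(len(blocks))
    for st, (lags, cc) in zip(stations, lag_cc):
        dist = np.linalg.norm(blocks - np.asarray(st, float), axis=1)
        two_way = 2.0 * dist / vp   # lag time of a single backscattered P-wave
        score += np.interp(two_way, lags, 1.0 - np.asarray(cc, float))
    return score
```

For a station directly above two blocks at 15 and 30 km depth, a correlation drop at the 10-s lag maps onto the 30-km block (two-way time 2 × 30/6 = 10 s) and leaves the 15-km block unaffected.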
We found three areas with large decorrelation values, as shown in Fig. 6a, which were therefore the candidate regions for scattering property changes. Two areas were located near the northern edge of the aftershock distribution. The depths of the maximum values were 15 and 20 km, although the areas spread in the depth direction. The other area with a large decorrelation value was located at 30 km depth, just west of the central part of the aftershock distribution. Since the average P-wave velocity was approximately 6.0 km/s, the large decorrelation areas reflect the correlation reductions at lag times of about 10 s at N.AMAH and ATSUMA.

Fig. 6 (a) Analysis results of waveform decorrelation and distribution of seismicity at each depth. The blue color indicates decorrelation values within the blocks. Values of half of the maximum decorrelation value from this study, or more, are colored. Seismicity is plotted from September 6 to September 16, 2018. Each plot size indicates earthquake magnitude, and color indicates when the earthquake occurred. The triangles show locations of stations used in this study. (b) Figure reproduced from Japan Meteorological Agency (2018). The legends of the star and color scale in the original figure were translated into English from Japanese. The color contour shows the coseismic slip distribution estimated from near-field strong motion records. The black dots show the hypocenters relocated by using the double-difference method.

Figure 6b shows the coseismic slip distribution estimated from strong motion records and the hypocenters relocated by applying the double-difference method (Japan Meteorological Agency 2018). The aftershocks were distributed in two separated depth ranges of the fault plane: 10–15 km and 30–40 km. The coseismic slips were estimated between the two aftershock clusters. The maximum slip was located at 30 km, just above the central part of the deeper aftershock cluster.
Geodetic data also suggested that the coseismic fault was modeled at 15–30 km depth (Geospatial Information Authority of Japan 2018). We found that the large decorrelation areas were located at depths of 15–30 km. It is noteworthy that one large decorrelation area, at 30 km, was close to the location of the maximum slip area. This spatial correlation suggested that the decreased ACF waveform correlation found in this study was related to the coseismic slip on the fault. One interpretation is that the coseismic fault rupture generated cracks within or around the fault zone and that this changed the scattering property, or scatterer distribution.

In the focal region of the 2018 Hokkaido Eastern Iburi Earthquake, we detected temporal changes in seismic velocity and scattering property, based on autocorrelation analysis of ambient seismic noise. The stretching method, with the lag time window of 4.0–9.5 s, estimated seismic velocity reductions of 2–3% at two stations close to the epicenter. Coseismic velocity drops of similar magnitude (a few percent) have previously been reported in the shallow subsurface, down to a few hundred meters. Based on the relation between the values of PGA, PGV, and PDS and the dv/v, we showed that the velocity change was more closely related to PDS than to PGA or PGV. The amplitude of the velocity drop, and its correlation with PDS, implied that damage in the shallow layer due to strong ground motion was the main cause of the velocity drop. We also detected waveform distortion of the ACFs before and after the mainshock. If the ACFs were composed of single backscattered P-waves, the change in scattering property was located where the maximum slip was estimated. This suggests that the coseismic fault rupture generated cracks around the fault, which changed the scattering property in and around the fault zone.
ACF: autocorrelation function

CCF: cross-correlation function

NIED: National Research Institute for Earth Science and Disaster Resilience

PGA: peak ground acceleration

PGV: peak ground velocity

PDS: peak dynamic strain

We used waveform data from seismic stations maintained by JMA and also used waveform data from Hi-net and KiK-net, maintained by NIED. Figure 6b is a reproduction of a figure made by JMA in the report of the Headquarters for Earthquake Research Promotion. We thank JMA and the Headquarters for Earthquake Research Promotion for the reuse permission. We plotted figures using Generic Mapping Tools (Wessel et al. 2013). We would like to thank Editage (www.editage.jp) for English language editing. We thank Editor Saeko Kita, Takuto Maeda, and an anonymous reviewer for their constructive comments.

This study was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan, under its Earthquake and Volcano Hazards Observation and Research Program. This research was also supported by JSPS KAKENHI Grant Numbers 16K17788, 17H02950, and 18K19952.

HI performed data analysis and prepared the manuscript. RT helped with interpretation and revised the manuscript. Both authors read and approved the final manuscript.

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Research Center for Prediction of Earthquakes and Volcanic Eruptions, Graduate School of Science, Tohoku University, 6-6 Aza-Aoba, Aramaki, Aoba-ku, Sendai 980-8578, Japan

Present address: LAC Co., Ltd, 2-16-1 Hirakawacho, Chiyoda-ku, Tokyo 102-0093, Japan

Bonnefoy-Claudet S, Cornou C, Bard PY, Cotton F, Moczo P, Kristek J, Fah D (2006) H/V ratio: a tool for site effects evaluation. Results from 1-D noise simulations. Geophys J Int 167:827–837. https://doi.org/10.1111/j.1365-246X.2006.03154.x

Brenguier F, Campillo M, Hadziioannou C, Shapiro NM, Nadeau RM, Larose E (2008) Postseismic relaxation along the San Andreas fault at Parkfield from continuous seismological observations. Science 321(5895):1478–1481. https://doi.org/10.1126/science.1160943

Brenguier F, Campillo M, Takeda T, Aoki Y, Shapiro NM, Briand X, Emoto K, Miyake H (2014) Mapping pressurized volcanic fluids from induced crustal seismic velocity drops. Science 345(6192):80–82. https://doi.org/10.1126/science.1254073

Campillo M, Paul A (2003) Long-range correlations in the diffuse seismic coda. Science 299(5606):547–549. https://doi.org/10.1126/science.1078551

Chen JH, Froment B, Liu QY, Campillo M (2010) Distribution of seismic wave speed changes associated with the 12 May 2008 Mw 7.9 Wenchuan earthquake. Geophys Res Lett 37:L18302. https://doi.org/10.1029/2010gl044582

Chen KH, Furumura T, Rubinstein J (2015) Near-surface versus fault zone damage following the 1999 Chi–Chi earthquake: observation and simulation of repeating earthquakes. J Geophys Res Solid Earth 120:2426–2445. https://doi.org/10.1002/2014JB011719

Geospatial Information Authority of Japan (2018) Information of the 2018 Hokkaido Eastern Iburi earthquake. http://www.gsi.go.jp/BOUSAI/H30-hokkaidoiburi-east-earthquake-index.html#8.
Accessed 9 Jan 2019

Hobiger M, Wegler U, Shiomi K, Nakahara H (2014) Single-station cross-correlation analysis of ambient seismic noise: application to stations in the surroundings of the 2008 Iwate–Miyagi Nairiku earthquake. Geophys J Int 198:90–109. https://doi.org/10.1093/gji/ggu115

Hobiger M, Wegler U, Shiomi K, Nakahara H (2016) Coseismic and post-seismic velocity changes detected by passive image interferometry: comparison of one great and five strong earthquakes in Japan. Geophys J Int 205:1053–1073. https://doi.org/10.1093/gji/ggw066

Japan Meteorological Agency (2018) The 2018 Hokkaido Eastern Iburi earthquake (Relocated hypocenter distribution by using the DD method). Evaluation of the 2018 Hokkaido Eastern Iburi earthquake (Published on 12 October 2018). https://www.static.jishin.go.jp/resource/monthly/2018/20180906_iburi_3.pdf. Accessed 9 Jan 2019

Katsumata A (2010) Depth of the Moho discontinuity beneath the Japanese islands estimated by traveltime analysis. J Geophys Res 115:B04303. https://doi.org/10.1029/2008JB005864

Kita S, Okada T, Hasegawa A, Nakajima J, Matsuzawa T (2010) Anomalous deepening of a seismic belt in the upper-plane of the double seismic zone in the Pacific slab beneath the Hokkaido corner: possible evidence for thermal shielding caused by subducted forearc crust materials. Earth Planet Sci Lett 290:415–426. https://doi.org/10.1016/j.epsl.2009.12.038

Kita S, Hasegawa A, Nakajima J, Okada T, Matsuzawa T, Katsumata K (2012) High-resolution seismic velocity structure beneath the Hokkaido corner, northern Japan: arc-arc collision and origins of the 1970 M 6.7 Hidaka and 1982 M 7.1 Urakawa-oki earthquakes. J Geophys Res 117:B12301. https://doi.org/10.1029/2012jb009356

Maeda T, Obara K, Yukutake Y (2010) Seismic velocity decrease and recovery related to earthquake swarms in a geothermal area.
Earth Planets Space 62(9):685–691. https://doi.org/10.5047/eps.2010.08.006

Nakata N, Snieder R (2011) Near-surface weakening in Japan after the 2011 Tohoku-Oki earthquake. Geophys Res Lett 38:L17302. https://doi.org/10.1029/2011GL048800

National Research Institute for Earth Science and Disaster Resilience (2018) The 6 September 2018 earthquake in the center-eastern Iburi district: hypocenter distribution and first-motion focal mechanism. http://www.hinet.bosai.go.jp/topics/ishikari180906/?LANG=ja&m=summary. Accessed 14 Feb 2019

National Research Institute for Earth Science and Disaster Resilience (2019a) NIED K-NET, KiK-net. National Research Institute for Earth Science and Disaster Resilience, Tsukuba. https://doi.org/10.17598/NIED.0004

National Research Institute for Earth Science and Disaster Resilience (2019b) NIED Hi-net. National Research Institute for Earth Science and Disaster Resilience, Tsukuba. https://doi.org/10.17598/nied.0003

Nishimura T, Nakamachi H, Tanaka S, Sato M, Kobayashi T, Ueki S, Hamaguchi H, Ohtake M, Sato H (2000) Source process of very long period seismic events associated with the 1998 activity of Iwate Volcano, northeastern Japan. J Geophys Res 105(B8):19135–19147. https://doi.org/10.1029/2000JB900155

Obermann A, Planes T, Larose E, Sens-Schönfelder C, Campillo M (2013) Depth sensitivity of seismic coda waves to velocity perturbations in an elastic heterogeneous medium. Geophys J Int 1:11. https://doi.org/10.1093/gji/ggt043

Obermann A, Froment B, Campillo M, Larose E, Planes T, Valette B, Chen JH, Liu QY (2014) Seismic noise correlations to image structural and mechanical changes associated with the Mw 7.9 2008 Wenchuan earthquake. J Geophys Res 119:3155–3168.
https://doi.org/10.1002/2013JB010932

Poupinet G, Ellsworth WL, Frechet J (1984) Monitoring velocity variations in the crust using earthquake doublets: an application to the Calaveras fault, California. J Geophys Res 89(B7):5719–5731

Rivet D, Campillo M, Shapiro NM, Cruz-Atienza V, Radiguet M, Cotte N, Kostoglodov V (2011) Seismic evidence of nonlinear crustal deformation during a large slow slip event in Mexico. Geophys Res Lett 38:L08308. https://doi.org/10.1029/2011GL047151

Roux P, Sabra KG, Gerstoft P, Kuperman WA (2005) P-waves from cross-correlation of seismic noise. Geophys Res Lett 32:L19303. https://doi.org/10.1029/2005GL023803

Sawazaki K, Snieder R (2013) Time-lapse changes of P- and S-wave velocities and shear wave splitting in the first year after the 2011 Tohoku earthquake, Japan: shallow subsurface. Geophys J Int 193:238–251. https://doi.org/10.1093/gji/ggs080

Sens-Schönfelder C, Wegler U (2006) Passive image interferometry and seasonal variations of seismic velocities at Merapi Volcano, Indonesia. Geophys Res Lett 33:L21302. https://doi.org/10.1029/2006GL027797

Takagi R (2014) Development in seismic interferometry for subsurface monitoring—an application to the 2011 Tohoku-oki earthquake. Ph.D. thesis, Department of Geophysics, Graduate School of Science, Tohoku University

Takagi R, Okada T (2012) Temporal change in shear velocity and polarization anisotropy related to the 2011 M9.0 Tohoku-Oki earthquake examined using KiK-net vertical array data. Geophys Res Lett 39:L09310. https://doi.org/10.1029/2012gl051342

Takagi R, Okada T, Nakahara H, Umino N, Hasegawa A (2012) Coseismic velocity change in and around the focal region of the 2008 Iwate–Miyagi Nairiku earthquake. J Geophys Res 117:B06315.
https://doi.org/10.1029/2012JB009252

Ueno H, Hatakeyama S, Aketagawa T, Funasaki J, Hamada N (2002) Improvement of hypocenter determination procedures in the Japan Meteorological Agency (in Japanese with English abstract). Q J Seismol 65:123–134

Ueno T, Saito T, Shiomi K, Enescu B, Hirose H, Obara K (2012) Fractional seismic velocity change related to magma intrusions during earthquake swarms in the eastern Izu peninsula, central Japan. J Geophys Res 117:B12305. https://doi.org/10.1029/2012JB009580

Weaver RL, Hadziioannou C, Larose E, Campillo M (2011) On the precision of noise correlation interferometry. Geophys J Int 168(3):1029–1033. https://doi.org/10.1111/j.1365-246X.2011.05015.x

Wegler U, Nakahara H, Sens-Schönfelder C, Korn M, Shiomi K (2009) Sudden drop of seismic velocity after the 2004 Mw 6.6 mid-Niigata earthquake, Japan, observed with passive image interferometry. J Geophys Res 114:B06305. https://doi.org/10.1029/2008jb005869

Wessel P, Smith WHF, Scharroo R, Luis J, Wobbe F (2013) Generic mapping tools: improved version released. Eos Trans Am Geophys Union 94(45):409–410

Yukutake Y, Ueno T, Miyaoka K (2016) Determination of temporal changes in seismic velocity caused by volcanic activity in and around Hakone volcano, central Japan, using ambient seismic noise records. Progr Earth Planet Sci 3:29. https://doi.org/10.1186/s40645-016-0106-5
Hexagonal tiling

In geometry, the hexagonal tiling or hexagonal tessellation is a regular tiling of the Euclidean plane, in which exactly three hexagons meet at each vertex. It has Schläfli symbol of {6,3} or t{3,6} (as a truncated triangular tiling).

Hexagonal tiling
Type: Regular tiling
Vertex configuration: 6.6.6 (or 6³)
Face configuration: V3.3.3.3.3.3 (or V3⁶)
Schläfli symbol(s): {6,3}, t{3,6}
Wythoff symbol(s): 3 | 6 2; 2 6 | 3; 3 3 3 |
Symmetry: p6m, [6,3], (*632)
Rotation symmetry: p6, [6,3]+, (632)
Dual: Triangular tiling
Properties: Vertex-transitive, edge-transitive, face-transitive

English mathematician John Conway called it a hextille.

The internal angle of the hexagon is 120 degrees, so three hexagons at a point make a full 360 degrees. It is one of three regular tilings of the plane. The other two are the triangular tiling and the square tiling.

Applications

Hexagonal tiling is the densest way to arrange circles in two dimensions. The honeycomb conjecture states that hexagonal tiling is the best way to divide a surface into regions of equal area with the least total perimeter. The optimal three-dimensional structure for making honeycomb (or rather, soap bubbles) was investigated by Lord Kelvin, who believed that the Kelvin structure (or body-centered cubic lattice) is optimal. However, the less regular Weaire–Phelan structure is slightly better. This structure exists naturally in the form of graphite, where each sheet of graphene resembles chicken wire, with strong covalent carbon bonds. Tubular graphene sheets have been synthesised, known as carbon nanotubes. They have many potential applications, due to their high tensile strength and electrical properties. Silicene is similar. Chicken wire consists of a hexagonal lattice (often not regular) of wires.
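The angle arithmetic behind "exactly three hexagons meet at each vertex", and behind there being only three regular tilings, can be checked directly:

```python
def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees."""
    return (n - 2) * 180.0 / n

# Three regular hexagons fit exactly around a vertex:
assert interior_angle(6) == 120.0
assert 360.0 / interior_angle(6) == 3.0

# 360/angle = 2n/(n-2) is an integer (>= 3 polygons per vertex) only when
# n - 2 divides 4, i.e. for n = 3, 4, 6 -- the triangular, square, and
# hexagonal tilings.
regular_tilings = [n for n in range(3, 100)
                   if (360.0 / interior_angle(n)).is_integer()]
assert regular_tilings == [3, 4, 6]
```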
• The densest circle packing is arranged like the hexagons in this tiling
• Chicken wire fencing
• Graphene
• A carbon nanotube can be seen as a hexagon tiling on a cylindrical surface
• Hexagonal Persian tile, c. 1955

The hexagonal tiling appears in many crystals. In three dimensions, the face-centered cubic and hexagonal close packing are common crystal structures. They are the densest sphere packings in three dimensions. Structurally, they comprise parallel layers of hexagonal tilings, similar to the structure of graphite. They differ in the way that the layers are staggered from each other, with the face-centered cubic being the more regular of the two. Pure copper, amongst other materials, forms a face-centered cubic lattice.

Uniform colorings

There are three distinct uniform colorings of a hexagonal tiling, all generated from reflective symmetry of Wythoff constructions. The (h,k) represent the periodic repeat of one colored tile, counting hexagonal distances as h first, and k second. The same counting is used in the Goldberg polyhedra, with a notation {p+,3}h,k, and can be applied to hyperbolic tilings for p > 6. The colorings have (h,k) values (1,0), (1,1), (2,0), and (2,1), with symmetries p6m (*632), p3m1 (*333), p6m (*632), and p6 (632), respectively.

The 3-color tiling is a tessellation generated by the order-3 permutohedrons.

Chamfered hexagonal tiling

A chamfered hexagonal tiling replaces edges with new hexagons and transforms into another hexagonal tiling. In the limit, the original faces disappear, and the new hexagons degenerate into rhombi, and it becomes a rhombic tiling.

Hexagons (H) · Chamfered hexagons (cH) · Rhombi (daH)

Related tilings

The hexagons can be dissected into sets of 6 triangles.
This process leads to two 2-uniform tilings, and the triangular tiling:

(Table: the regular hexagonal tiling and its 1/3-dissected, 2/3-dissected, and fully dissected variants, with their dual tilings.)

The hexagonal tiling can be considered an elongated rhombic tiling, where each vertex of the rhombic tiling is stretched into a new edge. This is similar to the relation of the rhombic dodecahedron and the rhombo-hexagonal dodecahedron tessellations in 3 dimensions.

Rhombic tiling · Hexagonal tiling · Fencing uses this relation

It is also possible to subdivide the prototiles of certain hexagonal tilings by two, three, four or nine equal pentagons:

• Pentagonal tiling type 1 with overlays of regular hexagons (each comprising 2 pentagons).
• Pentagonal tiling type 3 with overlays of regular hexagons (each comprising 3 pentagons).
• Pentagonal tiling type 4 with overlays of semiregular hexagons (each comprising 4 pentagons).
• Pentagonal tiling type 3 with overlays of two sizes of regular hexagons (comprising 3 and 9 pentagons respectively).

Symmetry mutations

This tiling is topologically related as a part of a sequence of regular tilings with hexagonal faces, starting with the hexagonal tiling, with Schläfli symbol {6,n}, progressing to infinity: {6,2} (spherical), {6,3} (Euclidean), and {6,4}, {6,5}, {6,6}, {6,7}, {6,8}, … {6,∞} (hyperbolic).

This tiling is topologically related to regular polyhedra with vertex figure n³, as a part of a sequence that continues into the hyperbolic plane: {2,3}, {3,3}, {4,3}, {5,3} (spherical), {6,3} (Euclidean), and {7,3}, {8,3}, … {∞,3} (hyperbolic).

It is similarly related to the uniform truncated polyhedra with vertex figure n.6.6.
(Tables: the truncated figures n.6.6 and the n-kis figures Vn.6.6 under the symmetries *n32, from spherical through Euclidean to hyperbolic; and the dual quasiregular tilings V(3.n)², of which V(3.6)² is the Euclidean case.)

This tiling is also part of a sequence of truncated rhombic polyhedra and tilings with [n,3] Coxeter group symmetry. The cube can be seen as a rhombic hexahedron where the rhombi are squares. The truncated forms have regular n-gons at the truncated vertices, and nonregular hexagonal faces.

Wythoff constructions from hexagonal and triangular tilings

Like the uniform polyhedra there are eight uniform tilings that can be based on the regular hexagonal tiling (or the dual triangular tiling). Drawing the tiles colored red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms, 7 of which are topologically distinct. (The truncated triangular tiling is topologically identical to the hexagonal tiling.) Under [6,3] (*632) symmetry and its subgroups, these are {6,3}, t{6,3}, r{6,3}, t{3,6}, {3,6}, rr{6,3}, tr{6,3}, and sr{6,3}, with vertex configurations 6³, 3.12.12, (6.3)², 6.6.6, 3⁶, 3.4.6.4, 4.6.12, and 3.3.3.3.6, respectively.

Monohedral convex hexagonal tilings

There are 3 types of monohedral convex hexagonal tilings.[1] They are all isohedral. Each has parametric variations within a fixed symmetry. Type 2 contains glide reflections, and is 2-isohedral keeping chiral pairs distinct.
The 3 types of monohedral convex hexagonal tilings:

• Type 1: symmetry p2 (2222); b = e, B + C + D = 360°; 2-tile lattice.
• Type 2: symmetry pgg (22×), with p2 (2222) for the chiral pairs; b = e, d = f, B + C + E = 360°; 4-tile lattice.
• Type 3: symmetry p3 (333); a = f, b = c, d = e, B = D = F = 120°; 3-tile lattice.

Topologically equivalent tilings

Hexagonal tilings can be made with the identical {6,3} topology as the regular tiling (3 hexagons around every vertex). With isohedral faces, there are 13 variations. Symmetry given assumes all faces are the same color. Colors here represent the lattice positions.[2] Single-color (1-tile) lattices are parallelogon hexagons. The 13 isohedrally-tiled hexagons have symmetries pg (××), p2 (2222), p3 (333), pmg (22*), pgg (22×), p31m (3*3), cmm (2*22), and p6m (*632).

Other isohedrally-tiled topological hexagonal tilings are seen as quadrilaterals and pentagons that are not edge-to-edge, but interpreted as colinear adjacent edges: quadrilaterals (parallelograms, trapezoids, and rectangles, with symmetries pmg (22*), pgg (22×), cmm (2*22), and p2 (2222)) and pentagons (p2 (2222), pgg (22×), and p3 (333)).

The 2-uniform and 3-uniform tessellations have a rotational degree of freedom which distorts 2/3 of the hexagons, including a colinear case that can also be seen as a non-edge-to-edge tiling of hexagons and larger triangles.[3] It can also be distorted into a chiral 4-colored tri-directional weaved pattern, distorting some hexagons into parallelograms. The weaved pattern with 2 colored faces has rotational 632 (p6) symmetry. A chevron pattern has pmg (22*) symmetry, which is lowered to p1 (°) with 3 or 4 colored tiles.

Circle packing

The hexagonal tiling can be used as a circle packing, placing equal-diameter circles centered on every vertex.
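The densities of the two packings described in this section can be computed directly. The derivation below assumes unit edge length and circles of radius 1/2 centered on the tiling's vertices (and then also on the hexagon centers):

```python
import math

# Circles of radius 1/2 centered on the vertices of a unit-edge hexagonal
# tiling: each circle touches its 3 neighbours (kissing number 3).  The
# honeycomb arrangement of vertices puts 2 circles in a rhombic unit cell
# of side sqrt(3).
cell_area = (math.sqrt(3) / 2.0) * math.sqrt(3) ** 2      # = 3*sqrt(3)/2
density_honeycomb = 2.0 * math.pi * 0.5 ** 2 / cell_area  # = pi/(3*sqrt(3))

# Adding one circle of the same radius in the gap at each hexagon centre
# (the centre-to-vertex distance is 1, so it just touches the others)
# yields the densest, triangular-lattice packing with density pi/sqrt(12).
density_triangular = math.pi / math.sqrt(12.0)

assert density_honeycomb < density_triangular < 1.0
```

The vertex-centered arrangement covers about 60.5% of the plane; filling the hexagon centers raises this to the optimal ≈ 90.7%.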
Every circle is in contact with 3 other circles in the packing (kissing number).[4] The gap inside each hexagon allows for one circle, creating the densest packing from the triangular tiling, with each circle in contact with a maximum of 6 circles.

Related regular complex apeirogons

There are 2 regular complex apeirogons, sharing the vertices of the hexagonal tiling. Regular complex apeirogons have vertices and edges, where edges can contain 2 or more vertices. Regular apeirogons p{q}r are constrained by: 1/p + 2/q + 1/r = 1. Edges have p vertices, and vertex figures are r-gonal.[5] The first is made of 2-edges, three around every vertex; the second has hexagonal edges, three around every vertex. A third complex apeirogon, sharing the same vertices, is quasiregular, alternating 2-edges and 6-edges. The two regular apeirogons are denoted 2{12}3 and 6{4}3 respectively.

See also

• Hexagonal lattice
• Hexagonal prismatic honeycomb
• Tilings of regular polygons
• List of uniform tilings
• List of regular polytopes
• Hexagonal tiling honeycomb
• Hex map board game design

References

1. Tilings and Patterns, Sec. 9.3 Other Monohedral tilings by convex polygons
2. Tilings and Patterns, from list of 107 isohedral tilings, pp. 473–481
3. Tilings and patterns, uniform tilings that are not edge-to-edge
4. Order in Space: A design source book, Keith Critchlow, pp. 74–75, pattern 2
5. Coxeter, Regular Complex Polytopes, pp. 111–112, p. 136.

• Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, ISBN 0-486-61480-8, p. 296, Table II: Regular honeycombs
• Grünbaum, Branko; Shephard, G. C. (1987). Tilings and Patterns. New York: W. H. Freeman. ISBN 0-7167-1193-1. (Chapter 2.1: Regular and uniform tilings, pp. 58–65)
• Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. p. 35. ISBN 0-486-23729-X.
• John H.
Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, ISBN 978-1-56881-220-5

External links

• Weisstein, Eric W. "Hexagonal Grid". MathWorld.
• Weisstein, Eric W. "Regular tessellation". MathWorld.
• Weisstein, Eric W. "Uniform tessellation". MathWorld.
• Klitzing, Richard. "2D Euclidean tilings o3o6x – hexat – O3".
\begin{document} \title{\LARGE \bf Advertising Competitions in Social Networks} \thispagestyle{empty} \pagestyle{empty} \begin{abstract} In the present work, we study the advertising competition of several marketing campaigns that need to determine how many resources to allocate to potential customers to advertise their products through direct marketing, while taking into account that competing marketing campaigns are trying to do the same. Potential customers rank marketing campaigns according to the offers, promotions or discounts made to them. Taking into account the intrinsic value of potential customers, as well as the peer influence that they exert over other potential customers, we consider the network value as a measure of their importance in the market and we find an analytical expression for it. We analyze the marketing campaigns' competition from a game theory point of view, finding a closed-form expression of the symmetric equilibrium offer strategy for the marketing campaigns, from which no campaign has any incentive to deviate. We also present several scenarios, such as Winner-takes-all and Borda (by no means the only possible ones), for which our results allow us to retrieve the corresponding equilibrium strategy in a simple way. \end{abstract} \section{INTRODUCTION} In the internet age, direct marketing, which promotes a product or service exclusively to potential customers likely to be profitable, has attracted the attention of marketing campaigns, replacing in some instances, and complementing in others, traditional mass marketing, which promotes a product or service indiscriminately to all potential customers. In the context of direct marketing, Domingos and Richardson~\cite{DomingosR2001, RichardsonD2002} considered the {\sl network value of a customer} by incorporating the influence of peers on the decision making process of potential customers deciding between different products or services promoted by competing marketing campaigns.
If each potential customer makes a buying decision independently of every other potential customer, we should only consider his intrinsic value, i.e. the expected profit from sales to him. However, when we consider the often strong influence potential customers exert on their peers, friends, etc., we have to incorporate this influence into their network value. Most of the existing state of the art considers that there is an incumbent that holds the market and a challenger who needs to allocate advertisement through direct marketing for certain individuals at a given cost of adoption to promote the challenger's product or service. However, the cost of adoption is unknown for most potential customers. In the present work, our focus is on how many resources to allocate to potential customers, while knowing that competing marketing campaigns are doing the same, for them to adopt one marketing campaign versus another. We are interested in the scenario in which several competing marketing campaigns need to simultaneously and independently decide how many resources to allocate to potential customers to advertise their products, whereas most of the state of the art focuses on only one marketing campaign (the non-simultaneous case is also analyzed). The process and dynamics by which influence is spread is given by the voter model. \subsection{Related Works} The general problem of influence maximization was first introduced by Domingos and Richardson~\cite{DomingosR2001,RichardsonD2002}. Based on the results of Nemhauser et al.~\cite{NemhauserWF1978}, Kempe et al.~\cite{Kempe2003,Kempe2005} provided a greedy $(1-1/e-\varepsilon)$-approximation algorithm for the spread maximizing set. A slightly different model with a similar flavor, the voter model, was introduced by Clifford and Sudbury~\cite{CliffordS1973} and Holley and Liggett~\cite{HolleyL1975}. In that model of social networks, Even-Dar and Shapira~\cite{EvenDar2007} found an exact solution to the spread maximization set problem.
In this work, we focus on this model of social networks since, even if the solutions are not always simple, we can find them explicitly. Competitive influence in social networks has been studied in other scenarios. Bharathi et al.~\cite{BharathiKS2007} proposed a generalization of the independent cascade model of social networks and gave a $(1-1/e)$ approximation algorithm for computing the best response to an already known opponent's strategy. Goyal and Kearns~\cite{GoyalK2012} studied the case of two players simultaneously choosing some nodes to initially seed while considering two independent functions for the consumers, denoted the switching function and the selection function. Borodin et al.~\cite{BorodinFO2010} showed that for a broad family of competitive influence models it is NP-hard to achieve an approximation that is better than the square root of the optimal solution. Chasparis and Shamma~\cite{ChasparisS2010} found optimal advertising policies using dynamic programming on some particular models of social networks. Within the general context of competitive contests, there is an extensive literature (see e.g.~\cite{GrossW1950,Roberson2006,masucciS2014,MasucciS2015}). To study competitive contests, we use recent advances of game theory, and in particular of Colonel Blotto games. The Colonel Blotto game was first solved for the case of two generals and three battlefields by Borel~\cite{Borel1921,BorelV1938}. For the case of equally valued battlefields, also known as the homogeneous battlefields case, this result was generalized for any number of battlefields by Gross and Wagner~\cite{GrossW1950}. Gross~\cite{Gross1950} proved the existence of, and gave a method to construct, the joint probability distribution. Laslier and Picard~\cite{LaslierP2002} provided alternative methods to construct the joint distribution by extending the method proposed by Gross and Wagner~\cite{GrossW1950}.
Roberson~\cite{Roberson2006} focused on the case of two generals, homogeneous battlefields and different budgets (also known as the asymmetric budgets case). Friedman~\cite{Friedman1958} studied the Nash equilibrium and best response function for the asymmetric budgets case with two generals. The case of two generals where, for each distinct value, there are at least three battlefields with that value was stated and solved by Roberson~\cite{Roberson2010} and Schwartz et al.~\cite{SchwartzLS2014}. In the context of voting systems, Myerson~\cite{myerson1993} found the solution for the case of equally valued battlefields with ranking scores for any number of candidates. The plan of this work is as follows. In Section~\ref{sec:model} we describe the model that we are considering. In Section~\ref{sec:results} we give the main results that we have obtained. In Section~\ref{sec:simulations} we give simulations on some scenarios and in Section~\ref{sec:conclusions} we conclude and describe future extensions of our work. \section{MODEL}\label{sec:model} Consider the set of marketing campaigns~\mbox{$\mathcal{K}=\{1,2,\ldots,K\}$} that need to allocate a certain budget, denoted by~$B$, across a set of potential customers \mbox{$\mathcal{V}=\{1,2,\ldots,N\}$} through offers (or promotions or discounts). Each potential customer indicates his preferences through a ranking (defined in the following subsection) of the $K$ products or services promoted by the marketing campaigns. For $n\in\mathcal{V}$, we denote by $w_n$ the intrinsic value of potential customer~$n$ and by $W=\sum_{n\in\mathcal{V}} w_n$ the total intrinsic value of the set of potential customers. Similarly, we denote by $v_n$ the network value (to be determined) of potential customer~$n\in\mathcal{V}$ and by $V=\sum_{n\in\mathcal{V}} v_n$ the total network value of the set of potential customers.
To avoid specifying the number of potential customers and dealing with the complexities of large finite numbers, we consider the number of potential customers to be essentially infinite. We should, however, interpret such an infinite model as an approximation to a large finite population with hundreds or thousands of potential customers. We assume that campaigns' offers are independent across individual potential customers, so that the offers to one potential customer have no specific relationship with the offers made to any other set of potential customers. This independence assumption for offers greatly simplifies our analysis, because it allows us to completely characterize a marketing campaign's promises by the marginal distribution of his offers to potential customers, without saying anything more about the joint distribution of offers to various sets of potential customers. The infinite-population assumption (suggested and used in~\cite{myerson1993}) was introduced above essentially only to justify this simplifying assumption of offers' independence across potential customers. Each marketing campaign's budget constraint is expressed as a constraint on the average offer per potential customer that a marketing campaign can promise. Specifically, we assume here that each marketing campaign's offer distribution for potential customer~$n$ must have mean~$B v_n/V$ to be considered credible by potential customer~$n$. The reason is that budget $B$ should be allocated across $N$ potential customers and each potential customer~$n$ has relative value~$v_n/V$. With a finite population of $N$ potential customers, and with a fixed budget of $B$ dollars to be allocated, marketing campaign promises could not be independent across all potential customers, because the offers to all potential customers would have to sum to the given budget~$B$.
However, due to Kolmogorov's strong law of large numbers, as the number of potential customers $N$ increases, the sum of independently distributed offers with high probability will converge to the budget~$B$. Indeed, if the mean of the campaign's offer distribution for potential customer $n\in\mathcal{V}$ is given by $Bv_n/V$ and the support of the distribution is bounded then, for any small positive number~$\varepsilon$, $N$ potential customers' offers that are drawn independently from the campaign's distribution would have probability less than $\varepsilon$ of totalling more than $(1+\varepsilon)\sum_{n\in\mathcal{V}}Bv_n/V=B(1+\varepsilon)$, when $N$ is sufficiently large. Thus, taking the limit as the population goes to infinity, we can assume that each campaign makes independent offers to every potential customer and the budget constraint will hold with high probability. The potential customers and their influence relationships can be modeled as an undirected graph with self-loops~$\mathcal{G}=(\mathcal{V},\mathcal{E})$ where $\mathcal{V}$ is the set of nodes which represent the potential customers and $\mathcal{E}$ is the set of edges which represent the mutual influence between potential customers. \subsection*{Notation} Part of the notation is summarized in Table~\ref{table:notation}. We denote by~$\lvert\mathcal{A}\rvert$ the cardinality of set~$\mathcal{A}$. We denote by index $k$ one of the marketing campaigns and by index $-k$ the competing (or set of competing) marketing campaign(s) to~$k$. For a potential customer $n\in\mathcal{V}$, we denote by $\mathcal{N}(n)$ the set of neighbors of $n$ in graph~$\mathcal{G}$, i.e. \mbox{$\mathcal{N}(n)=\{m\in\mathcal{V}: \{n,m\}\in\mathcal{E}\}$}. 
\begin{table*} \caption{Notation} \label{table:notation} \centering \begin{tabular}{|c|l|} \hline $\mathcal{V}=\{1,2,\ldots,N\}$ & Set of potential customers\\ \hline $\mathcal{K}=\{1,2,\ldots,K\}$ & Set of marketing campaigns\\ \hline $B$ & Total budget of marketing campaigns\\ \hline $w_n$ & Intrinsic value of potential customer $n$\\ \hline $v_n$ & Network value of potential customer $n$\\ \hline $W=\sum_{n\in\mathcal{V}}w_n$ & Total intrinsic value of potential customers\\ \hline $V=\sum_{n\in\mathcal{V}}v_n$ & Total network value of potential customers\\ \hline $\mathcal{G}=(\mathcal{V},\mathcal{E})$ & Graph of influence relationships\\ \hline $M$ & Normalized transition matrix of~$\mathcal{G}$\\ \hline $(s_1,s_2,\ldots,s_K)$ such that & \multirow{3}{*}{Normalized rank-scoring rule}\\ $s_1\ge s_2\ge\ldots\ge s_K=0$, & \\ $\sum_{j\in\mathcal{K}}s_j=1$ & \\ \hline $x_{k,n}$ & Offer of campaign~$k$ to customer~$n$\\ \hline $\mathbf{x}_{k,\scalerel*{\cdot}{\bigodot}}$ & Vector of offers of marketing campaign~$k$\\ \hline $\mathbf{X}=\{x_{k,n}\}_{k\in\mathcal{K},n\in\mathcal{V}}$ & Matrix of offers\\ \hline $\mathbf{X}_{-k,\scalerel*{\cdot}{\bigodot}}$ & Matrix of offers of competing campaigns\\ \hline $\pi^{\mathrm{INT}}$ & Intrinsic payoff function\\ \hline $\pi$ & (Network) payoff function\\ \hline $u^t_n(\cdot)$ & Ranking function\\ \hline $f^0(\cdot)$ & Initial preferences\\ \hline $f^t(\cdot)$ & Preferences at time $t$\\ \hline \end{tabular} \end{table*} \subsection{Normalized rank-scoring rules} We consider that each potential customer ranks the set of marketing campaigns $\mathcal{K}$ in order of their offers to her. We assume a normalized rank-scoring rule characterized by an ordered sequence of $K$ numbers, which we denote by $s_1,s_2,\ldots,s_K$, where \mbox{$s_1\ge s_2\ge\ldots\ge s_K=0$} and such that $\sum_{k=1}^K s_k=1$. 
We consider that each potential customer~$n\in\mathcal{V}$ distributes her value~$v_n$ across marketing campaigns according to this normalized rank-scoring rule \mbox{$\mathbf{s}=(s_1,s_2,\ldots,s_K)$} as follows: \mbox{$v_n\mathbf{s}=(v_n s_1, v_n s_2, \ldots, v_n s_K)$}. Thus, potential customer \mbox{$n\in\mathcal{V}$} gives the top-ranked marketing campaign $v_n s_1$ points, the second-ranked marketing campaign $v_n s_2$, and so on, with the $k$th ranked marketing campaign getting~$v_n s_k$ for all $k\in\mathcal{K}$. Therefore, the total payoff distributed is indeed $\sum_{k=1}^K v_n s_k= v_n\sum_{k=1}^K s_k=v_n$ where the last equality comes from the normalization of the rank-scores. Each marketing campaign's payoff corresponds to the sum of the payoffs across all potential customers. The previous assumption is not restrictive. Given any rank-scoring rule, where $s_1,s_2,\ldots,s_K,$ are not all equal and without loss of generality $s_1\ge s_2\ge\ldots\ge s_K$, it can be normalized to fulfill the previous statement. Indeed, let $S=\sum_{j=1}^K (s_j-s_K)$. We observe that we can normalize the rank-scoring rule as follows $(s'_1,s'_2,\ldots,s'_K)=(\frac{s_1-s_K}{S},\frac{s_2-s_K}{S},\ldots,\frac{s_K-s_K}{S})$ so that $s'_K=0$ and the sum of the normalized rank-scores is equal to $1$. \subsection{Intrinsic payoff function} We assume that the intrinsic value of potential customer $n\in\mathcal{V}$ is given by $w_n\le U$ with $U$ finite and we denote by~${\bf w}=(w_1,w_2,\ldots,w_N)$ the vector of intrinsic values of potential customers. We consider the matrix of offers of marketing campaigns to potential customers, denoted by ${\bf X}=(x_{k,n})$, where $x_{k,n}$ corresponds to the offer of marketing campaign~$k\in\mathcal{K}$ to potential customer~$n\in\mathcal{V}$. We denote by ${\bf x}_{k,\scalerel*{\cdot}{\bigodot}}=(x_{k,1},x_{k,2},\ldots,x_{k,N})$ the vector of offers of marketing campaign~$k\in\mathcal{K}$.
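As a concrete illustration of the normalization above, the following Python sketch applies the transformation $s'_j=(s_j-s_K)/S$ to a Borda-like raw rule (the numbers are purely illustrative):

```python
# Sketch of the rank-score normalization above: given raw rank-scores
# s_1 >= s_2 >= ... >= s_K (not all equal), produce s' with s'_K = 0
# and sum(s') = 1.

def normalize_rank_scores(s):
    S = sum(sj - s[-1] for sj in s)        # S = sum_j (s_j - s_K)
    return [(sj - s[-1]) / S for sj in s]

# Borda-style raw scores for K = 4 campaigns (illustrative values).
s = [3, 2, 1, 0]
s_prime = normalize_rank_scores(s)
# s_prime = [1/2, 1/3, 1/6, 0]: last score is 0 and the scores sum to 1
```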
We denote by ${\bf X}_{{-k},\scalerel*{\cdot}{\bigodot}}$ the matrix of offers to potential customers made by the marketing campaigns competing with~$k$. For potential customer~$n\in\mathcal{V}$, we consider a ranking function \mbox{$u_n:\mathcal{K}\rightarrow\{1,2,\ldots,K\}$} which maps a given marketing campaign~$k$ to the rank assigned to it by that potential customer. For example, if marketing campaign~$k$ is the top-ranked marketing campaign and $k'$ is the third-ranked marketing campaign for potential customer~$n\in\mathcal{V}$ then~$u_n(k)=1$ and $u_n(k')=3$. The intrinsic payoff function for marketing campaign~$k$ is given by \begin{equation}\label{eq:intrinsicpayoff} \pi_k^{\mathrm{INT}}({\bf x}_{k,\scalerel*{\cdot}{\bigodot}},{\bf X}_{{-k},\scalerel*{\cdot}{\bigodot}},{\bf w}) =\sum_{n=1}^N w_n s_{u_n(k)}, \end{equation} where $s_{u_n(k)}$ corresponds to the rank-score given by potential customer~$n$ for the ranking of marketing campaign~$k$. We observe that $s_{u_n(k)}$ depends only on the offers made to potential customer~$n$. \subsection{Evolution of the system} We consider that time is slotted and, without loss of generality, that the initial time is~\mbox{$t_0=0$}. We consider the function \mbox{$f^0:\mathcal{V}\rightarrow\mathcal{K}^K$} which maps a potential customer \mbox{$n\in\mathcal{V}$} to her initial preferences \mbox{$f^0(n)=(f^0_1(n),f^0_2(n),\ldots,f^0_K(n))$} where $f^0_1(n)$ corresponds to her initial top-ranked marketing campaign, $f^0_2(n)$ corresponds to her initial second-ranked marketing campaign, and so on. Similarly, for $t\ge1$ we consider function \mbox{$f^t:\mathcal{V}\rightarrow\mathcal{K}^K$} which maps a potential customer \mbox{$n\in\mathcal{V}$} to her preferences at time~$t$, denoted by \mbox{$f^t(n)=(f^t_1(n),f^t_2(n),\ldots,f^t_K(n))$}, where $f^t_1(n)$ corresponds to her top-ranked marketing campaign at time $t$, $f^t_2(n)$ corresponds to her second-ranked marketing campaign at time $t$, and so on.
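As a concrete illustration of the intrinsic payoff~\eqref{eq:intrinsicpayoff}, the following Python sketch (with hypothetical offers, intrinsic values and rank-scores) ranks the campaigns by their offers to each potential customer and accumulates the corresponding rank-scores; ties are broken arbitrarily here, which is harmless since, as shown later, equilibrium offer distributions are atomless:

```python
# Sketch of the intrinsic payoff (1): customer n ranks campaigns by their
# offers x_{k,n} (higher offer -> better rank) and campaign k collects
# w_n * s_{u_n(k)}. All numbers below are hypothetical.

def intrinsic_payoffs(X, w, s):
    """X[k][n]: offer of campaign k to customer n; w: intrinsic values;
    s: normalized rank-scores s_1 >= ... >= s_K = 0."""
    K, N = len(X), len(w)
    payoff = [0.0] * K
    for n in range(N):
        # campaigns sorted by their offer to customer n, best first
        order = sorted(range(K), key=lambda k: -X[k][n])
        for rank, k in enumerate(order):
            payoff[k] += w[n] * s[rank]
    return payoff

X = [[5.0, 1.0], [2.0, 4.0], [0.0, 3.0]]   # 3 campaigns, 2 customers
w = [10.0, 20.0]                            # intrinsic values
s = [2/3, 1/3, 0.0]                         # normalized rank-scores
payoffs = intrinsic_payoffs(X, w, s)
# The payoffs always sum to W = sum(w), since each customer distributes
# exactly her intrinsic value across the campaigns.
```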
The evolution of the system will be described by the voter model. Starting from any arbitrary initial preference assignment by the potential customers of $\mathcal{G}$, at each time $t\ge 1$, each potential customer picks uniformly at random one of his neighbors and adopts his opinion. Equivalently, $f^t(j)=f^{t-1}(j')$ with probability $1/\lvert\mathcal{N}(j)\rvert$ if \mbox{$j'\in\mathcal{N}(j)$}. Similarly to the previous subsection, for $t\ge0$ and potential customer~$n\in\mathcal{V}$, we consider function \mbox{$u_n^t:\mathcal{K}\rightarrow\{1,2,\ldots,K\}$} which gives the ranking of a given marketing campaign~$k\in\mathcal{K}$ for potential customer~\mbox{$n$} at time $t$. We are interested in the network value of a potential customer. Following the steps of~\cite{masucciS2014}, in the next section we compute this value. \section{RESULTS}\label{sec:results} \subsection{Network value of a customer} We notice that in the voter model described in the previous section, the probability that potential customer $j$ adopts the opinion of one of her neighbors $j'$ is precisely $1/\lvert \mathcal{N}(j)\rvert$. Equivalently, this is the probability that a random walk of length $1$ that starts at $j$ ends up in~$j'$. Generalizing this observation by induction on~$t$, we obtain the following proposition. \begin{proposition}[Even-Dar and Shapira~\cite{EvenDar2007}] Let $p_{j,j'}^t$ denote the probability that a random walk of length $t$ starting at potential customer $j$ stops at potential customer $j'$. Then the probability that after $t$ iterations of the voter model, potential customer $j$ will adopt the opinion that potential customer $j'$ had at time $t=0$ is precisely $p_{j,j'}^t$.
\end{proposition} By linearity of expectation, the expected network payoff for marketing campaign~$k\in\mathcal{K}$ at target time $\tau$, denoted by $\pi^\tau_k$, is given by \begin{equation*} \pi^\tau_k=\sum_{j\in\mathcal{V}}\sum_{j'\in\mathcal{V}} w_j p^\tau_{j,j'} s_{u_{j'}^\tau(k)}. \end{equation*} Let $M$ be the normalized transition matrix of $\mathcal{G}$, i.e., $M(j,j')=1/\lvert\mathcal{N}(j)\rvert$ if $j'\in\mathcal{N}(j)$ and zero otherwise. The probability that a random walk of length~$\tau$ starting at~$j$ ends in $j'$ is given by the $(j,j')$-entry of the matrix~$M^\tau$. Then \begin{equation*} \pi^\tau_k=\sum_{j\in\mathcal{V}}\sum_{j'\in\mathcal{V}} w_j M^\tau(j,j') s_{u_{j'}^\tau(k)}. \end{equation*} Therefore, the expected network payoff is given by \begin{equation}\label{eq:networkpayoff} \pi^\tau_k=\sum_{j'\in\mathcal{V}} v_{j'} s_{u_{j'}^\tau(k)}, \end{equation} where the network value of potential customer $j'$ at target time~$\tau$ is given by \begin{equation*} v_{j'}=\sum_{j\in\mathcal{V}} w_{j}M^\tau(j,j'). \end{equation*} We can formalize this in the following statement. \begin{theorem}\label{theo:doumbodo} Under the rank-scoring rule with normalized ranking points $(s_1,s_2,\ldots,s_K)$ and intrinsic values $(w_1,w_2,\ldots,w_N)$, the network value of potential customer $j'$ at target time~$\tau$ is given by \begin{equation*} v_{j'}=\sum_{j\in\mathcal{V}} w_{j}M^\tau(j,j'), \end{equation*} where $M$ is the normalized transition matrix of~$\mathcal{G}$. \end{theorem} We notice that both eqns.~\eqref{eq:intrinsicpayoff} and~\eqref{eq:networkpayoff} are similar. The only difference is that one considers the intrinsic value and the other the network value (given by Theorem~\ref{theo:doumbodo}) of potential customers. 
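To make Theorem~\ref{theo:doumbodo} concrete, the following Python sketch builds the normalized transition matrix $M$ and computes the network values $v_{j'}=\sum_j w_j M^\tau(j,j')$ for a small hypothetical graph (a 4-customer path with self-loops and unit intrinsic values; none of these numbers come from the paper):

```python
# Sketch of Theorem 1 (network values): v_{j'} = sum_j w_j M^tau(j, j').
# Hypothetical 4-customer path graph 1-2-3-4 with self-loops.

def transition_matrix(neighbors):
    """Normalized transition matrix: M[j][jp] = 1/|N(j)| if jp in N(j)."""
    nodes = sorted(neighbors)
    idx = {n: i for i, n in enumerate(nodes)}
    M = [[0.0] * len(nodes) for _ in nodes]
    for j, nbrs in neighbors.items():
        for jp in nbrs:
            M[idx[j]][idx[jp]] = 1.0 / len(nbrs)
    return M

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(M, t):
    R = [[float(i == j) for j in range(len(M))] for i in range(len(M))]
    for _ in range(t):
        R = mat_mul(R, M)
    return R

graph = {1: {1, 2}, 2: {1, 2, 3}, 3: {2, 3, 4}, 4: {3, 4}}  # self-loops
w = [1.0, 1.0, 1.0, 1.0]           # intrinsic values (illustrative)
tau = 3                             # target time
Mt = mat_pow(transition_matrix(graph), tau)
v = [sum(w[j] * Mt[j][jp] for j in range(4)) for jp in range(4)]
```

Since each row of $M$ sums to one, the total network value equals the total intrinsic value: the rank-scoring payoff being distributed is conserved by the voter dynamics.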
From eqns.~\eqref{eq:intrinsicpayoff} and~\eqref{eq:networkpayoff}, we obtain that after determining the network value of potential customers, the problem of determining the resource allocation that maximizes the expected network payoff is similar to the problem of determining the resource allocation that maximizes the expected intrinsic payoff. Therefore, in the following we restrict ourselves to this problem. \subsection{Non-simultaneous allocations} In this subsection, we show that the intrinsic payoff problem is easy to solve in the case where one marketing campaign can observe what competing marketing campaigns are offering and after that makes offers to potential customers. Indeed, even in the case of two marketing campaigns, if marketing campaign~$2$ could make offers after observing the offers made by marketing campaign~$1$, then marketing campaign~$2$ will always be preferred by the most valuable potential customers. For example, marketing campaign~$2$ could identify a small group of potential customers who are the least valuable among those who are promised strictly positive offers by marketing campaign~$1$ (e.g. the bottom $5\%$ of the offer distribution of marketing campaign~$1$), and offer nothing to this group. Then campaign~$2$ could offer to every other potential customer slightly more than campaign~$1$ has promised him, where the excess over campaign~$1$'s offers is financed from the resources not given to the potential customers in the first group. Every potential customer outside of the first small group ($5\%$) would prefer marketing campaign~$2$, who would win $95\%$ of the most valuable potential customers. To avoid this simple outcome, we assume that all marketing campaigns must make their promises simultaneously. (We may think of scenarios in which it is important to make the first offers and in which there is a cost of delay in responding to the first offers, but those scenarios are outside the scope of this work.)
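The second-mover advantage described above can be sketched in Python as follows (all offers and values are hypothetical, and the $5\%$ threshold is the one used in the illustration above):

```python
# Sketch of the second-mover exploit above (hypothetical numbers):
# campaign 2 observes campaign 1's offers, zeroes out the least valuable
# customers holding roughly the bottom 5% of the offered budget, and uses
# the freed resources to slightly outbid campaign 1 everywhere else.

def best_response(offers1, values, eps=1e-6):
    # customers with positive offers, least valuable first
    order = sorted((n for n, x in enumerate(offers1) if x > 0),
                   key=lambda n: values[n])
    budget = sum(offers1)
    dropped, freed = set(), 0.0
    for n in order:                       # free roughly 5% of the budget
        if freed >= 0.05 * budget:
            break
        dropped.add(n)
        freed += offers1[n]
    offers2 = [0.0 if n in dropped else offers1[n] + eps
               for n in range(len(offers1))]
    return offers2, dropped

offers1 = [1.0, 2.0, 3.0, 4.0, 10.0]      # campaign 1's observed offers
values = [1.0, 2.0, 3.0, 4.0, 5.0]        # customer values
offers2, dropped = best_response(offers1, values)
# campaign 2 now outbids campaign 1 for every customer outside `dropped`,
# while spending no more than campaign 1's budget
```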
\subsection{Family of scalable probability distributions} We seek a solution that can be written as a family of offer probability distributions with a scaling parameter. We want offers to scale with the value (intrinsic or network value, depending on the context) of the potential customers; essentially, we look for an offer distribution that has the same shape relative to this value. We consider that the representative offer distribution~$F$ (the offer distribution with scale value~$1$) has a probability density function~$f$ in a bounded support~$I$. From the change of variables formula (a consequence of the fundamental theorem of calculus), we have the following. Let $I\subseteq\mathbb{R}$ be an interval and $\varphi:[a_1,b_1]\to I$ be a continuously differentiable function. Suppose that $f:I\to\mathbb{R}$ is a continuous function. Then \begin{equation*} \int_{\varphi(a_1)}^{\varphi(b_1)} f(x)\,dx = \int_{a_1}^{b_1}f(\varphi(t))\varphi'(t)\, dt. \end{equation*} For potential customer~$n\in\mathcal{V}$, we use function \mbox{$\varphi(x)=x/v_n$} which is continuously differentiable and scales the offers by a factor~$v_n$. Therefore, if the representative offer density~$f(x)$ has support $[a,b]$, the scaled offer density is given by $f(x/v_n)/v_n$ and it has support $[v_na,v_nb]$. We may represent marketing campaign $k$'s cumulative offer distribution by a family of probability distributions, with representative cumulative offer distribution~$F^k(x;a,b)$, where $F^k_n(x)=F^k(x/v_n;v_na,v_nb)$ denotes the fraction of potential customers to whom marketing campaign $k$ will offer less than value $x$. Each offer distribution for potential customer $n\in\mathcal{V}$ must have mean~$Bv_n/V$ and so $F_n^k$ must be a non-decreasing function that satisfies \begin{equation*} \int_0^\infty x\,dF_n^k(x)=Bv_n/V, \end{equation*} as well as $F_n^k(x)=0\quad\forall x\le0$, and \begin{equation*} \lim_{x\to+\infty}F_n^k(x)=1.
\end{equation*} \subsection{Symmetric equilibrium} A symmetric equilibrium of the marketing campaign competition is a scenario in which every marketing campaign is expected to use the same offer distribution, and each marketing campaign finds that using this offer distribution maximizes its chances of winning when the other marketing campaigns are also simultaneously and independently allocating their offers according to this distribution (and all potential customers perceive that the $K$ marketing campaigns have the same probability of winning the market). In this work, we focus exclusively on finding such symmetric equilibria. In the following, we prove that there is a symmetric equilibrium which corresponds to a family of probability distributions with scale parameter $v_n$ for potential customer $n\in\mathcal{V}$. Let $F(x)=F(x;a,b)$ denote the representative cumulative distribution function acting as the equilibrium strategy and let $F_n(x):=F(x/v_n;v_na,v_nb)$ denote the cumulative distribution function representing the equilibrium offer distribution for potential customer~$n\in\mathcal{V}$. $F_n(x)$ denotes the cumulative probability that a given potential customer~$n$ will be offered less than~$x$ by any other given marketing campaign, according to this equilibrium distribution. Consider the situation faced by a given marketing campaign~$k$ when it chooses its offer distribution, assuming that every other marketing campaign will use the equilibrium offer distribution. When marketing campaign~$k$ offers $x$ to potential customer~$n$, the probability that this marketing campaign $k$ will be ranked in position $j$ by potential customer~$n$ is given by $P(j,F_n(x))$ where we let \begin{equation*} P(j,q)={K-1\choose j-1} q^{K-j}(1-q)^{j-1}. 
\end{equation*} That is, $P(j,q)$ denotes the probability that exactly $j-1$ of the $K-1$ competing marketing campaigns will offer more than $x$, given that each other marketing campaign has an independent probability $q$ of offering less than $x$ to this potential customer. Equivalently, $P(j,q)$ denotes the probability that exactly $K-j$ of the $K-1$ competing marketing campaigns will offer less than $x$. If marketing campaign $k$ offers $x$ to potential customer~$n$, then the expected value that this potential customer will give to this marketing campaign is $R_n(F_n(x))$ where \begin{equation*} R_n(q)=v_n\sum_{j=1}^K P(j,q)s_j. \end{equation*} Things could be more difficult if there were a positive probability of other marketing campaigns offering exactly~$x$, but we can ignore such complications because we will prove (see Lemma~\ref{lemma:sudan}) that the equilibrium distribution cannot assign positive probability to any single point. When all marketing campaigns independently use the same offer distribution, they must all get the same expected score from potential customer~$n$, which must equal~$v_n/K$. \begin{theorem}\label{theo:tembine} In a $K$-marketing campaign competition under the normalized rank-scoring rule $(s_1,s_2,\ldots,s_K)$ and values $(v_1,v_2,\ldots,v_N)$, there is a unique scalable symmetric equilibrium of the marketing campaigns' offer-distribution game. In this equilibrium, each marketing campaign chooses to generate offers according to a family of probability distributions, with scale parameter $v_n$ for potential customer $n$, that has support on the interval from $0$ to $s_1KBv_n/V$, and which has a cumulative distribution $F(\cdot)$ that satisfies the equation \begin{equation*} x=R_n(F_n(x))/(V/KB),\quad\forall x\in[0,s_1KBv_n/V]. \end{equation*} \end{theorem} The proof follows the steps of Theorem~$2$ in~\cite{myerson1993}; it is constructive, and we decompose it into the following lemmas.
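Before turning to the proof, the fixed-point characterization of Theorem~\ref{theo:tembine} can be evaluated numerically. The following Python sketch (with hypothetical parameter values) recovers $F_n(x)$ by bisection, using the fact, established below, that $R_n(\cdot)$ is strictly increasing:

```python
# Sketch of the equilibrium condition in Theorem 2: solve
# x = R_n(F_n(x)) * KB/V for F_n(x) by bisection, where
# R_n(q) = v_n * sum_j P(j, q) * s_j. Hypothetical parameter values.
from math import comb

def P(j, q, K):
    return comb(K - 1, j - 1) * q ** (K - j) * (1 - q) ** (j - 1)

def r(q, s):
    """R_n(q) / v_n, i.e. the expected rank-score at quantile q."""
    K = len(s)
    return sum(P(j, q, K) * s[j - 1] for j in range(1, K + 1))

def F(x, v_n, s, B, V, K, iters=60):
    """Equilibrium c.d.f. for customer n: solve r(q) = x*V/(K*B*v_n)."""
    target = x * V / (K * B * v_n)
    lo, hi = 0.0, 1.0
    for _ in range(iters):               # r is strictly increasing in q
        mid = 0.5 * (lo + hi)
        if r(mid, s) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# K = 3 campaigns, winner-takes-all scores (1, 0, 0), unit values.
s, K, B, V, v_n = [1.0, 0.0, 0.0], 3, 1.0, 1.0, 1.0
x_max = s[0] * K * B * v_n / V           # upper end of the support
```

For the winner-takes-all rule with $K=3$, $R_n(q)=v_nq^2$, so the sketch reproduces the closed form $F_n(x)=\sqrt{xV/(KBv_n)}$ on the support $[0, s_1KBv_n/V]$.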
\begin{lemma}\label{lemma:sudan} If there is a symmetric equilibrium distribution of offers, it must be continuous, i.e. it cannot have any points of positive probability. \end{lemma} \begin{proof} If all marketing campaigns used a representative offer distribution $F(\cdot)$ that assigned a positive probability $\delta$ to some point $x>0$, then there would be a positive fraction $\delta^K$ of potential customers who would be exactly indifferent among the marketing campaigns, since they would receive the same offer from each of them. Any marketing campaign could then increase his average point score among this group by giving an arbitrarily small increase (say, $\varepsilon$) to most of the potential customers to whom he was going to offer $x$; the cost of this increase could be financed by moving an arbitrarily small fraction of this group down to zero. In other words, if the offer distribution had a positive mass at some point, then a marketing campaign could win over a positive fraction of potential customers by a transfer of resources that would lower his score from only an arbitrarily small number of potential customers. \end{proof} \begin{lemma} We have that \begin{equation*} R_n(0)=s_Kv_n=0,\quad R_n(1)=s_1v_n, \end{equation*} and $R_n(\cdot)$ is a continuous and strictly increasing function over the interval from $0$ to $1$. \end{lemma} \begin{proof} These equations hold because $P(j,0)$ equals $0$ unless $j$ equals $K$, $P(j,1)$ equals $0$ unless $j$ equals $1$, and $P(K,0)=1=P(1,1)$. Continuity of $R_n(\cdot)$ follows directly from the formulas, because $R_n(q)$ is polynomial in~$q$. Let us show that $R_n(\cdot)$ is strictly increasing. First, we verify that \begin{equation*} R_n(q)=v_n\sum_{j=2}^K (s_{j-1}-s_j)\sum_{m<j} P(m,q), \end{equation*} using $s_K=0$.
We observe that $\sum_{m<j}P(m,q)$ denotes the probability that more than $K-j$ of the other marketing campaigns have made offers below~$x$, where each rival offer independently falls below $x$ with probability $q$, and this probability must be a strictly increasing function of~$q$. The ordering of the $s_j$ values guarantees that at least one term in this $R_n(q)$ expression must have a positive $(s_{j-1}-s_j)$ coefficient, and none can be negative. Therefore $R_n(\cdot)$ is a strictly increasing function. \end{proof} \begin{lemma} The lowest permissible offer~$0$ must be in the support of the equilibrium distribution of offers. \end{lemma} \begin{proof} The main idea is that, if the minimum of the support were strictly greater than zero, then a marketing campaign would be devoting positive resources to potential customers near the minimum of the support of the distribution. He would expect to get almost no value ($s_K=0$) from these potential customers, because all other marketing campaigns would almost surely be promising them more. Thus, it would be better to reduce the offers to $0$ for most of these potential customers in order to make serious offers for at least some of them. The above argument can be formalized as follows. Because, as we have shown before, there are no points of positive probability, the cumulative offer distribution $F_n(\cdot)$ for potential customer~$n$ is continuous. Let $z$ denote the minimum of the support of the equilibrium offer distribution for potential customer~$n$, so $F_n(z)=0$ but $F_n(z+\varepsilon)>0$ for all positive~$\varepsilon$. Now, select any fixed $y$ such that $y>z$ and $F_n(y)>0$. For any $\varepsilon$ such that $0<\varepsilon<y-z$, a marketing campaign might consider deviating from the equilibrium by promising either $y$ or $0$ to each potential customer~$n$ in the group of potential customers whom he was supposed to offer between $z$ and $(z+\varepsilon)$, according to his $F_n$-distributed random-offer generator.
The potential customers in this group were going to be given offers that averaged some amount between $z$ and $(z+\varepsilon)$, so he can offer $y$ dollars to at least a $z/y$ fraction of these potential customers without changing his offers to any other potential customer. Among this $z/y$ fraction of the group, he would get an average point score of $R_n(F_n(y))$, by outbidding the other marketing campaigns who are using the $F_n$ distribution; so the deviation would get him an average point score of at least $(z/y)R_n(F_n(y))$ from this group of potential customers (the potential customers moved down to zero in this deviation would give him $s_Kv_n=0$ points). If he follows the equilibrium, however, he gets at most $R_n(F_n(z+\varepsilon))$ as his average point score from this group of potential customers. So to deter such a deviation, we must have $(z/y)R_n(F_n(y))\le R_n(F_n(z+\varepsilon))$, and so \begin{equation*} z\le y\frac{R_n(F_n(z+\varepsilon))}{R_n(F_n(y))}. \end{equation*} But $R_n(F_n(z+\varepsilon))$ goes to $R_n(F_n(z))=R_n(0)=0$ as $\varepsilon$ goes to $0$, and so $z$ must equal $0$. \end{proof} \begin{lemma} There is some positive constant $\alpha$ such that \begin{equation*} R_n(F_n(x))=\alpha x. \end{equation*} \end{lemma} \begin{proof} Let $x$ and $y$ be any two numbers in the support of the equilibrium distribution for potential customer~$n$ such that \mbox{$0<x<y$}. A marketing campaign could deviate by taking a group of potential customers to whom he is supposed to give offers close to $x$, according to his equilibrium plan, and instead he could give them offers close to $y$ to an $x/y$ fraction of this group and he could offer $0$ to the remaining $(1-x/y)$ fraction. 
Because the support of the representative distribution contains $0$ as well as $x$ and $y$, neither this self-financing deviation nor its reverse (offering close to $x$ to a group of potential customers of whom an $x/y$ fraction were supposed to get close to $y$, and the remaining $(1-x/y)$ fraction were supposed to get close to~$0$) should increase the marketing campaign's expected average point score from this group of potential customers. Thus, we must have \begin{equation*} R_n(F_n(x))=(x/y)R_n(F_n(y))+(1-x/y)R_n(F_n(0)). \end{equation*} But $R_n(F_n(0))=R_n(0)=0$, so we obtain \[ \frac{R_n(F_n(x))}{x}=\frac{R_n(F_n(y))}{y}, \] for all $x$ and $y$ in the support of the equilibrium offer distribution for potential customer~$n$. So there is some positive constant $\alpha$ such that, for all $x$ in the support of the offer distribution for potential customer~$n$, $R_n(F_n(x))=\alpha x$. \end{proof} \begin{lemma} We have that the constant $\alpha=V/KB$. \end{lemma} \begin{proof} The mean offer must equal $Bv_n/V$ under the $F_n$ distribution, therefore \begin{equation*} \int_0^{s_1v_n/\alpha} x\,dF_n(x)=B\frac{v_n}{V}. \end{equation*} We also know that a marketing campaign who uses the same offer distribution $F_n$ as all the other marketing campaigns must expect the average point score $v_n/K$, so \begin{align*} \frac{v_n} K&=\int_0^{s_1v_n/\alpha} R_n(F_n(x))\,dF_n(x)=\int_0^{s_1v_n/\alpha} \alpha x\, dF_n(x)\\&=\alpha B\frac{v_n}{V}, \end{align*} and therefore $\alpha=V/(KB)$. \end{proof} From the previous lemma, the support of the $F_n$ distribution is the interval from~$0$ to $s_1v_n/\alpha=s_1KBv_n/V$, and the cumulative distribution satisfies the formula \begin{equation*} R_n(F_n(x))=\frac{V}{KB}x,\quad\forall x\in[0,s_1KBv_n/V]. \end{equation*} \begin{lemma}\label{lemma:doumbodo} $F_n$ is an equilibrium. \end{lemma} \begin{proof} In general, for any nonnegative~$x$, we have $R_n(F_n(x))\le\frac{V}{KB}x$, because when $x>s_1KBv_n/V$, $R_n(F_n(x))=R_n(1)=s_1v_n<\frac{V}{KB}x$.
So using any other distribution $G_n$, that has mean $Bv_n/V$ for potential customer~$n\in\mathcal{V}$ and is supported on the nonnegative numbers, would give to a marketing campaign an expected score \begin{align*} \int_0^\infty &R_n(F_n(x))\,dG_n(x)\le\int_0^\infty\frac{V}{KB} x\,dG_n(x)\\ &=\frac V {KB}\int_0^\infty x\,dG_n(x)=\frac{v_n}{K}, \end{align*} with equality if the support of $G_n$ is contained in the interval $[0,s_1KBv_n/V]$. Thus, no marketing campaign can increase his expected score by deviating from $F_n$ to some other distribution, when all other marketing campaigns are using the distribution~$F_n$. \end{proof} Lemmas~\ref{lemma:sudan}--\ref{lemma:doumbodo} together constitute the proof of Theorem~\ref{theo:tembine}. The previous theorem provides a method to obtain explicit cumulative offer distribution functions under different rank-scoring rules. \section{SIMULATIONS}\label{sec:simulations} \subsection*{Winner-takes-all} We notice that our problem is more general than a simple pairwise competition between marketing campaigns. For the pairwise competition there already exists a solution (see e.g.~\cite{SchwartzLS2014}). However, a pairwise competition is not always what is needed. For example, consider the case when each customer chooses only one marketing campaign to buy a product from (for example, buying a house: most potential customers will buy only one). To see this, consider the example of three competing marketing campaigns $X$, $Y$, and $Z$ and five equally valuable customers (for simplicity). Consider the pure strategies \begin{align*} {\bf x}&=(0.2,0.2,0.2,0.2,0.2),\\ {\bf y}&=(0.0,0.0,0.0,0.5,0.5),\\ {\bf z}&=(0.5,0.5,0.0,0.0,0.0).
\end{align*} In that case, the pairwise competition shows that marketing campaign $X$ wins $3$ out of $5$ potential customers against $Y$ (the first three), and $3$ out of $5$ potential customers against $Z$ (the last three), thus winning a pairwise competition against both competitors. However, since each customer will only choose one product, the final outcome will be $2$ customers for $Y$, $2$ customers for $Z$, and only $1$ customer for $X$. The case where the objective is to be the first-ranked marketing campaign, and being ranked second does not provide any value, can be represented as follows: \begin{equation*} s_1=1,\quad s_2=0,\quad\ldots,\quad s_K=0. \end{equation*} In that case $R_n(q)=v_n P(1,q)=v_n q^{K-1}$. Therefore from Theorem~\ref{theo:tembine} the equilibrium cumulative distribution satisfies \begin{equation*} x=(F(x))^{K-1}KBv_n/V,\quad\forall x\in[0,KBv_n/V], \end{equation*} and thus \begin{equation*} F(x)=\left(\frac{x}{KBv_n/V}\right)^{1/(K-1)},\quad\forall x\in[0,KBv_n/V]. \end{equation*} When $K=2$ we recover the result of~\cite{SchwartzLS2014} for pairwise competition. It is also interesting to notice the similarity between this solution and the characterization of the solution for an all-pay auction with one object~\cite{BayeKV1996}. We notice that there is a tight relationship between this scenario, Colonel Blotto games, and auctions. A Colonel Blotto game can be seen as a simultaneous all-pay auction of multiple items under complete information. An all-pay auction is an auction in which every bidder must forfeit its bid regardless of whether it wins the object, which is awarded to the highest bidder. It is an auction of complete information since the value of the object is known to every bidder. In other contexts, this was already noted by Szentes and Rosenthal~\cite{SzentesR2003}, Roberson~\cite{Roberson2006} and Kvasov~\cite{Kvasov2007}.
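The three-campaign example above can be checked directly. The following Python sketch (an illustration, not part of the paper's simulations) recomputes both the pairwise counts and the winner-takes-all outcome for the strategies ${\bf x}$, ${\bf y}$, ${\bf z}$.

```python
# The three pure strategies from the text: five equally valuable customers.
x = [0.2, 0.2, 0.2, 0.2, 0.2]
y = [0.0, 0.0, 0.0, 0.5, 0.5]
z = [0.5, 0.5, 0.0, 0.0, 0.0]

def pairwise_wins(a, b):
    """Number of customers to whom campaign a offers strictly more than b."""
    return sum(ai > bi for ai, bi in zip(a, b))

def plurality(offers):
    """Customers won outright by each campaign when every customer
    picks the single highest bidder (no ties occur in this example)."""
    wins = [0] * len(offers)
    for bids in zip(*offers):
        wins[bids.index(max(bids))] += 1
    return wins

# X beats both rivals pairwise, yet wins the fewest customers outright.
assert pairwise_wins(x, y) == 3 and pairwise_wins(x, z) == 3
assert plurality([x, y, z]) == [1, 2, 2]
```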
\begin{figure} \caption{Winner-takes-all equilibrium offer distribution when we consider a budget of $1000$ dollars, $K=2$ and~\mbox{$v_n/V=1/20$}; $K=4$ and \mbox{$v_n/V=1/40$}; and $K=6$ and \mbox{$v_n/V=1/60$} (so that the three cases have the same support).} \end{figure} \begin{figure} \caption{Borda equilibrium offer distribution when we consider a budget of $1000$ dollars, $K=2$, $K=4$ and $K=6$ and relative value~\mbox{$v_n/V=1/20$}. We notice that the Borda equilibrium offer distribution is independent of the number of competing marketing campaigns.} \label{fig:pato1} \end{figure} Figure~\ref{fig:pato1}(a) shows the equilibrium offer distribution when we consider that the budget of each marketing campaign is $1000$~dollars for three different competing scenarios: \begin{itemize} \item there are $2$ marketing campaigns and the relative value of a customer is $v_n/V=1/20$; \item there are $4$ marketing campaigns and the relative value of a customer is $v_n/V=1/40$; \item there are $6$ marketing campaigns and the relative value of a customer is $v_n/V=1/60$. \end{itemize} The parameters chosen in the three scenarios give the offer distributions the same support. We observe that when there are two competing marketing campaigns, the equilibrium offers are made uniformly at random over the support interval from $0$ to $100$ dollars. As the number of competing marketing campaigns increases, however, the offers become skewed: most potential customers are offered less than the average, while a small number of potential customers receive offers well above the average. In particular, for four marketing campaigns, more than $50\%$ of the potential customers receive offers of less than $14$ dollars (the average offer is $25$ dollars). This effect is even more pronounced for six marketing campaigns, where more than $50\%$ of potential customers receive offers of less than $4$ dollars (the average offer is $17$ dollars).
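The quoted means and medians follow from the closed-form winner-takes-all distribution. The Python sketch below (an illustration, not the paper's code) samples the $K=4$ scenario of the figure by inverse-CDF sampling and checks that the mean offer is $Bv_n/V=25$ dollars while the median is $U(1/2)^{K-1}=12.5$ dollars.

```python
import random

# Inverse-CDF check of the winner-takes-all equilibrium for the K = 4
# scenario of the figure: B = 1000 dollars, v_n/V = 1/40, so the support
# is [0, U] with U = K*B*v_n/V = 100 and F(x) = (x/U)**(1/(K-1)).
K, B, vn_over_V = 4, 1000.0, 1 / 40
U = K * B * vn_over_V                      # upper end of the support

rng = random.Random(0)
# F(x) = u  <=>  x = U * u**(K-1), so transform uniform draws directly.
draws = [U * rng.random() ** (K - 1) for _ in range(200_000)]

mean = sum(draws) / len(draws)
median = sorted(draws)[len(draws) // 2]

assert abs(mean - B * vn_over_V) < 0.5         # mean offer B*v_n/V = 25 dollars
assert abs(median - U * 0.5 ** (K - 1)) < 0.5  # median 100/8 = 12.5 dollars
```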
\subsection*{Borda} Another interesting case is when the rank-scoring rule is linearly decreasing with the ranking (we call it Borda for its similarity to Borda count voting). For example, it can be given by \begin{equation*} s_1=\frac{(K-1)}S,\quad s_2=\frac{(K-2)}S,\quad s_3=\frac{(K-3)}S,\quad\ldots,\quad s_K=0, \end{equation*} where $S=\sum_{j=1}^K (K-j)=K(K-1)/2$, so that $\sum_{j=1}^K s_j=1$. The function~$R_n(q)$ under that rule is given by \begin{align*} R_n(q)&=v_n\sum_{j=1}^K P(j,q)\frac{2(K-j)}{K(K-1)}\\ &=\frac{2v_n}K\sum_{j=1}^K\binom{K-1}{j-1}q^{K-j}(1-q)^{j-1}\frac{K-j}{K-1}\\ &=\frac{2v_n}K\sum_{j=0}^{K'}\binom{K'}{j}q^{K'-j}(1-q)^j\left(1-\frac{j}{K'}\right)\\ &=\frac{2v_n}K\left(1-\frac{K'(1-q)}{K'}\right)=\frac{2v_n}Kq, \end{align*} where we have made the change of variable $K'=K-1$ and used the formula for the expected value of a binomial distribution. Thus, by the previous theorem, \begin{align*} \frac{V}{KB}x &=\frac{2v_n}{K} F(x), \quad\forall x\in[0,2Bv_n/V]. \end{align*} Therefore, \begin{equation*} F(x)=\frac{x}{2Bv_n/V},\quad\forall x\in[0,2Bv_n/V]. \end{equation*} That is, the equilibrium offer distribution under this rule is a uniform distribution over the interval from $0$ to $2Bv_n/V$. We notice that the equilibrium offer distribution is independent of the number of competing marketing campaigns~$K$. Figure~\ref{fig:pato1}(b) shows the equilibrium offer distribution when we consider that the budget for each marketing campaign is $1000$ dollars, the relative value of a customer is $v_n/V=1/20$, and we consider three scenarios with $K=2$, $K=4$, and $K=6$. We observe that in these three scenarios the equilibrium offer distribution is uniform over the support interval from $0$ to $100$ dollars and is {\sl independent of the number of competing marketing campaigns}. The Winner-takes-all and Borda rules are just two out of the many possible scenarios that can be analyzed and to which our previous results can be applied.
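The collapse of the Borda rule to a uniform equilibrium rests on the identity $R_n(q)=2v_nq/K$. The following Python sketch (illustrative only) verifies the identity numerically against the defining sum for the values of $K$ used in the figure.

```python
from math import comb

def R_direct(q, v, K):
    """R_n(q) = v * sum_j P(j,q) * s_j with the Borda scores
    s_j = 2(K - j) / (K(K - 1))."""
    total = 0.0
    for j in range(1, K + 1):
        P = comb(K - 1, j - 1) * q ** (K - j) * (1 - q) ** (j - 1)
        s = 2 * (K - j) / (K * (K - 1))
        total += P * s
    return v * total

# Compare against the closed form R_n(q) = 2 v_n q / K.
for K in (2, 4, 6):
    for q in (0.0, 0.25, 0.5, 0.9, 1.0):
        assert abs(R_direct(q, 7.0, K) - 2 * 7.0 * q / K) < 1e-12
```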
\section{Conclusions}\label{sec:conclusions} In this work, we studied advertising competitions in social networks. In particular, we analyzed the scenario of several marketing campaigns determining which potential customers to target and how many resources to allocate to these potential customers, while taking into account that competing marketing campaigns are trying to do the same. As a consequence of social network dynamics, the importance of every potential customer in the market can be expressed in terms of her network value, a measure of the influence she exerts on her peers and friends, for which we provided an analytical expression under the voter model of social networks. Defining rank-scoring rules for potential customers and using tools from game theory, we have given a closed-form expression for the symmetric equilibrium offer strategy of the marketing campaigns, from which no campaign has any incentive to deviate. Moreover, we presented some interesting scenarios, out of the many possible, to which our results can be applied. \end{document}
Sum of Poisson distribution
Published on: Dec 4, 2020

A sum property of Poisson random variables. Here we will show that if Y and Z are independent Poisson random variables with parameters λ1 and λ2, respectively, then Y+Z has a Poisson distribution with parameter λ1+λ2. In this segment, we consider the sum of independent Poisson random variables, and we establish a remarkable fact, namely that the sum is also Poisson. For n independent Poisson(λ) variables, E[X_1 + ... + X_n] = nλ. The Poisson distribution is named after Siméon Denis Poisson (1781–1840). This works only if you have a theorem that says a distribution with the same moment-generating function as a Poisson distribution has a Poisson distribution. In probability theory, a compound Poisson distribution is the probability distribution of the sum of a number of independent identically-distributed random variables, where the number of terms to be added is itself a Poisson-distributed variable. In the simplest cases, the result can be either a continuous or a discrete distribution. The random variable X associated with a Poisson process is discrete, and therefore the Poisson distribution is discrete. We assume we observe independent draws from a Poisson distribution. If the mean number of events per interval is L, the probability of observing x events in a given interval is given by the Poisson PMF; the PMF of the sum of independent random variables is the convolution of their PMFs. I will keep calling it L from now on, though.
Before we even begin showing this, let us recall what it means for two random variables to be independent. The probability mass function of the Poisson distribution is given by

p(X = x) = [L^x * e^(-L)] / x!    (2.2)

Let σ denote the variance of X. So Z = X + Y is Poisson, and we just sum the parameters. Here is an example where μ = 3.74. The programming on this page will find the Poisson distribution that most closely fits an observed frequency distribution, as determined by the method of least squares (i.e., the smallest possible sum of squared distances between the observed frequencies and the Poisson expected frequencies). Say X_1, X_2, X_3 are independent Poissons. Then (X_1 + X_2) is Poisson, and we can add on X_3 and still have a Poisson random variable; the moment generating function of X_1 + X_2 is M(t) = exp((λ1 + λ2)(e^t − 1)). To make your own odds, first calculate or estimate the likelihood of an event, then use the following formula: Odds = 1/(probability), i.e. a probability of 1/N corresponds to a "1 in N" chance. This has a huge application in many practical scenarios, like determining the number of calls received per minute at a call centre or the number of unbaked cookies in a batch at a bakery, and much more. Simulate 100,000 draws from the Poisson(1) distribution, saving them as X. Simulate 100,000 draws separately from the Poisson(2) distribution, and save them as Y. Add X and Y together to create a variable Z. We expect Z to follow a Poisson(3) distribution. Using the Poisson distribution, the probability of winning a football match is the sum of the probabilities of each individual possible winning score.
Below are some of the uses of the formula: in the call center industry, to find out the probability of calls that will take more than the usual time, and based on that, the average waiting time for customers. The Poisson distribution is related to the exponential distribution. Suppose an event can occur several times within a given unit of time. But it's neat to know that it really is just the binomial distribution, and the binomial distribution really did come from kind of the common sense of flipping coins. The probability of a certain event is constant in an interval based on space or time. The Poisson probability distribution is used in situations where events occur randomly and independently a number of times on average during an interval of time or space. When the total number of occurrences of the event is unknown, we can think of it as a random variable. The zero-truncated Poisson distribution, or positive Poisson distribution, has a probability function given by P(X = x) = [e^(-m) m^x / x!] / (1 - e^(-m)) for x = 1, 2, ..., which can be seen to be the same as the non-truncated Poisson with an adjustment factor of 1/(1 - e^(-m)) to ensure that the missing class x = 0 is allowed for, such that the sum of the remaining probabilities equals 1. The Poisson distribution possesses the reproductive property that the sum of independent Poisson random variables is also a Poisson random variable. The Poisson distribution was discovered by a French mathematician and physicist, Siméon Denis Poisson, in 1837. The Poisson distribution can work if the data set is a discrete distribution, each occurrence is independent of the other occurrences, it describes discrete events over an interval, events in each interval can range from zero to infinity, and the mean number of occurrences must be constant throughout the process. Thus an independent sum of Poisson distributions is a Poisson distribution with parameter equal to the sum of the individual Poisson parameters.
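The reproductive property can be illustrated with a short simulation. The sketch below is illustrative only; it uses Knuth's Poisson sampler (adequate for small rates) and checks that the sum of Poisson(1) and Poisson(2) draws has the mean and variance of a Poisson(3) variable.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's sampler: count uniform draws until their running product
    falls below exp(-lam).  Fine for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(1)
l1, l2, n = 1.0, 2.0, 100_000
z = [poisson(l1, rng) + poisson(l2, rng) for _ in range(n)]
mean = sum(z) / n
var = sum((v - mean) ** 2 for v in z) / n

assert abs(mean - (l1 + l2)) < 0.05   # a Poisson(3) variable has mean 3 ...
assert abs(var - (l1 + l2)) < 0.10    # ... and variance 3
```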
Then \( V = \sum_{i=1}^N U_i \) has a compound Poisson distribution. In addition, poisson is French for fish. In more formal terms, we observe the first terms of an IID sequence of Poisson random variables. The count of events that will occur during the interval k being usually interval of time, a distance, volume or area. So X 1 + X 2 + X 3 is a Poisson random variable. The Poisson distribution became useful as it models events, particularly uncommon events. As you point out, the sum of independent Poisson distributions is again a Poisson distribution, with parameter equal to the sum of the parameters of the original distributions. This is a fact that we can establish by using the convolution formula.. And this is really interesting because a lot of times people give you the formula for the Poisson distribution and you can kind of just plug in the numbers and use it. \) The following is the plot of the Poisson cumulative distribution function with the same values of λ as the pdf plots above. Poisson distribution. The total number of successes, which can be between 0 and N, is a binomial random variable. The Poisson distribution is implemented in the Wolfram Language as PoissonDistribution[mu]. 2) CP for P(x ≤ x given) represents the sum of probabilities for all cases from x = 0 to x given. The Poisson distribution The Poisson distribution is a discrete probability distribution for the counts of events that occur randomly in a given interval of time (or space). Given the mean number of successes (μ) that occur in a specified region, we can compute the Poisson probability based on the following formula: For the binomial distribution, you carry out N independent and identical Bernoulli trials. As expected, the Poisson distribution is normalized so that the sum of probabilities equals 1, since (9) The ratio of probabilities is given by (10) The Poisson distribution reaches a maximum when Works in general. 
To understand the parameter \(\mu\) of the Poisson distribution, a first step is to notice that mode of the distribution is just around \(\mu\). The probability generating function of the sum is the generating function of a Poisson distribution. Use the compare_histograms function to compare Z to 100,000 draws from a Poisson(3) distribution. In any event, the results on the mean and variance above and the generating function above hold with \( r t \) replaced by \( \lambda \). How do you make your own odds? Since the sum of probabilities adds up to 1, this is a true probability distribution. The classical example of the Poisson distribution is the number of Prussian soldiers accidentally killed by horse-kick, due to being the first example of the Poisson distribution's application to a real-world large data set. ; The average rate at which events occur is constant; The occurrence of one event does not affect the other events. Assumptions. Each trial has a probability, p, of success. To see this, suppose that X 1 and X 2 are independent Poisson random variables having respective means λ 1 and λ 2. In this chapter we will study a family of probability distributionsfor a countably infinite sample space, each member of which is called a Poisson Distribution. Properties of the Poisson distribution. The probability distribution of a Poisson random variable is called a Poisson distribution.. Show that the Poisson distribution sums to 1. The distribution We go So in calculateCumulatedProbability you need to create a new PoissonDistribution object with mean equal to the sum of the means of u1, u2 and u3 (so PoissonDistribution(20+30+40) in this case). The properties of the Poisson distribution have relation to those of the binomial distribution:. 
If X and Y are independent Poisson random variables with parameters \(\lambda_x \) and \(\lambda_y\) respectively, then \({ {X}+ {Y}}\) is a Poison distribution with parameter \(\lambda=\lambda_ {x}+\lambda_ {y} \) Example: Sum of Poisson Random Variables. Traditionally, the Greek letter Lambda is used for this . Check list for Poisson Distribution. Poisson Distribution. The Poisson parameter is proportional to the length of the interval. Prove that the sum of two Poisson variables also follows a Poisson distribution. Based on this equation the following cumulative probabilities are calculated: 1) CP for P(x < x given) is the sum of probabilities obtained for all cases from x= 0 to x given - 1. Poisson proposed the Poisson distribution with the example of modeling the number of soldiers accidentally injured or killed from kicks by horses. But in fact, compound Poisson variables usually do arise in the context of an underlying Poisson process. If we let X= The number of events in a given interval. Where I have used capital L to represent the parameter of the . Thus, the probability mass function of a term of the sequence is where is the support of the distribution and is the parameter of interest (for which we want to derive the MLE). The Poisson distribution is commonly used within industry and the sciences. Poisson Distribution: It is a discrete distribution which gives the probability of the number of events that will occur in a given period of time. Another useful property is that of splitting a Poisson distribution. The Poisson Distribution is a theoretical discrete probability distribution that is very useful in situations where the discrete events occur in a continuous manner. A Poisson random variable is the number of successes that result from a Poisson experiment. 
The formula for the Poisson cumulative probability function is \( F(x;\lambda) = \sum_{i=0}^{x}\frac{e^{-\lambda}\lambda^{i}}{i!}. \) The following is the plot of the Poisson cumulative distribution function with the same values of λ as the pdf plots above. Finding E(X), the mean of the Poisson, is actually fairly simple. The Poisson-binomial distribution is a generalization of the binomial distribution. Practical uses of the Poisson distribution. What about a sum of more than two independent Poisson random variables?
High-harmonic generation in metallic titanium nitride

A. Korobenko, S. Saha, A. T. K. Godfrey, M. Gertsvolf, A. Yu. Naumov, D. M. Villeneuve, A. Boltasseva, V. M. Shalaev & P. B. Corkum

Nature Communications volume 12, Article number: 4981 (2021)

High-harmonic generation is a cornerstone of nonlinear optics. It has been demonstrated in dielectrics, semiconductors, semi-metals, plasmas, and gases, but, until now, not in metals.
Here we report high harmonics of 800-nm-wavelength light irradiating a metallic titanium nitride film. Titanium nitride is a refractory metal known for its high melting temperature and large laser damage threshold. We show that it can withstand few-cycle light pulses with peak intensities as high as 13 TW/cm², enabling high-harmonics generation up to photon energies of 11 eV. We measure the emitted vacuum ultraviolet radiation as a function of the crystal orientation with respect to the laser polarization and show that it is consistent with the anisotropic conduction band structure of titanium nitride. The generation of high harmonics from metals opens a link between solid and plasma harmonics. In addition, titanium nitride is a promising material for refractory plasmonic devices and could enable compact vacuum ultraviolet frequency combs. When intense light irradiates a transparent material, harmonics are generated by the bound electrons or laser-generated free electrons [1-9]. The former is the realm of perturbative nonlinear optics while the latter are responsible for extreme nonlinear optics. Free electron related harmonics are primarily due to newly created free electrons that either recombine after a brief interval in the continuum (interband), or after creation, move non-harmonically on the complex bands of the material (intraband). Experiments indicate that, for near-infrared radiation, pre-existing free electrons are not a significant source [10]. In contrast, when normally incident light irradiates a plasma, the high density of free electrons keeps the light out of the material by reflecting it. The phase of the reflected light from a dense plasma is such that it forms a standing wave with a node at the plasma surface. High harmonics from plasmas, observed in many experiments, arise from p-polarized light where electrons are extracted from the surface and the surface discontinuity plays a critical role.
A metal, with its high density of electrons, shares many characteristics of plasmas, but the lattice, the resulting band structure and band filling, cannot be ignored. In this paper, we experimentally study the damage threshold of the epitaxial films of the refractory metal, titanium nitride (TiN). We show that, although lower than expected based on the lattice melting, thermal transport, and light absorption in the material, the damage threshold is still high11,12,13, enabling us to observe high harmonics. We find harmonics of 800 nm light reaching 11 eV with brightness comparable to those from magnesium oxide (MgO), a high melting point dielectric, irradiated with the same intensity. Thus, metals can produce high harmonics. We propose that they will occur universally in hard-to-damage bulk metals irradiated with few-cycle pulses. Because the motion of the conduction electrons is responsible for the plasma response in metals, we develop a simple model, considering the oscillation of the Fermi sea of the laser-driven electrons in a single conduction band of TiN. Extracting the band structure from density functional theory calculations, we use this model to qualitatively predict the angle dependence of the anharmonic motion as the laser polarization is rotated with respect to the lattice structure of the solid. The agreement between the prediction and experiment suggests that the average response of the electrons on the TiN conduction band is an important component of a complete theory. Damage threshold Figure 1 shows the layout of the optical setup. To determine the damage threshold, we block the laser beam and adjust its peak intensity with a wire grid polarizer pair. Once the power of the beam is established, it is unblocked, irradiating the film with 60,000 laser pulses. 
The sample, a 200 nm-thick TiN film epitaxially grown on an MgO substrate (see the "Methods" section for details on sample preparation), is then translated by 100 μm to a new spot, and the procedure is repeated with a different pulse intensity. After scanning a range of intensities, we removed the sample from the vacuum chamber and inspected it under an optical microscope (Fig. 2a) and an atomic force microscope (AFM) (Fig. 2b).

Fig. 1: Experimental setup. A 2.3-cycle laser pulse (central wavelength 770 nm) was passed through two wire grid polarizers and a half-wave plate. It was focused with a focusing mirror onto the TiN sample inside a vacuum chamber. The sample was mounted on a motorized XY stage, allowing its translation without realigning the optics. The generated high-harmonic radiation (HHG) passed through a slit, diffracted from a curved VUV grating, and reached the imaging microchannel plate (MCP) detector. The observed VUV spectrum was imaged with a CCD camera.

Fig. 2: Damage threshold measurement. a Optical microscope image of the irradiated spots on the TiN surface. Numbers 1 through 5 indicate the spots corresponding to the peak field intensities of 12, 13, 17, 21, and 24 TW/cm2, respectively. We observed modification starting from spot #2, and the film appeared stripped, with the underlying MgO exposed at spots #3, #4, and #5. b AFM image of spot #4 reveals a ~150 nm-deep crater, surrounded by a halo of swollen TiN material. The bottom of the crater shows a 40-fold increase in surface roughness (17 nm RMS), compared to the unmodified region of the sample (0.4 nm RMS), also showing scattered chunks of material with a characteristic size of 100 nm. The two blue dashed-dotted lines are the contour lines of the independently measured incident beam profile, corresponding to peak intensities of 13 and 15 TW/cm2. These contours set the thresholds for material modification and removal, respectively. 
Comparing the images with the independently measured incident beam profile, we determined the intensity thresholds to be 13 TW/cm2 and 15 TW/cm2 for TiN modification and ablation, respectively. Damage in pristine MgO was observed at around 50 TW/cm2. Using the two-temperature model approach14,15 for photo-induced damage in metals, together with TiN thermodynamic constants reported previously16, we estimated the heat deposition depth \(x_{\rm R}=180\,{\rm nm}\). This corresponds to the thickness of the TiN layer in which the hot electrons rethermalize with the lattice. The melting temperature of 3,203 K in this layer is achieved at an absorbed fluence of 0.23 J/cm2, which is more than an order of magnitude higher than the experimental threshold fluence of 0.021 J/cm2. Surprisingly, even if we assume that the electrons thermalize with the lattice instantaneously, in which case the heat deposition length is determined by the (spectrally averaged) TiN absorption length \(x_{\rm abs}=33\,{\rm nm}\), we still get an overestimated damage threshold fluence of 0.043 J/cm2. This suggests a non-thermal damage mechanism, such as a hot-electron blast force17,18. However, further study is required to confirm this hypothesis.

High harmonic generation

Although its damage threshold is lower than the two-temperature model predicts, TiN still withstood an order of magnitude higher incident energy than gold for a similar laser pulse19. In addition, its relatively low reflection coefficient of 85% allowed us to reach a high enough intensity inside the film to observe high harmonics. The harmonic radiation was emitted in the direction of specular reflection of the impinging beam. We collected it in a vacuum ultraviolet (VUV) spectrometer (Fig. 1). With the laser polarization along the [100] crystal direction, we set the laser peak intensity to 12 TW/cm2 and recorded the resulting VUV spectrum, shown in Fig. 3 with an orange line. 
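As a quick cross-check of the fluence figures in the damage-threshold discussion above, the conversion from peak intensity and pulse duration to fluence can be sketched in a few lines. The sketch assumes a Gaussian temporal profile and treats the quoted 85% reflectivity as spectrally flat, so its numbers are order-of-magnitude estimates rather than the spectrally averaged values used in the text:

```python
import math

# Back-of-the-envelope fluence estimate (illustrative, not the authors'
# analysis): a Gaussian pulse with peak intensity I0 and FWHM duration tau
# carries an incident fluence F = I0 * tau * sqrt(pi / (4 ln 2)).

def fluence_from_intensity(i_peak_w_cm2, tau_fwhm_s):
    """Incident fluence (J/cm^2) of a Gaussian pulse."""
    return i_peak_w_cm2 * tau_fwhm_s * math.sqrt(math.pi / (4.0 * math.log(2.0)))

i_threshold = 13e12   # TiN modification threshold from the text, W/cm^2
tau = 6e-15           # pulse duration from the Methods, s (6 fs FWHM)
reflectivity = 0.85   # reflection coefficient quoted in the text

f_incident = fluence_from_intensity(i_threshold, tau)   # ~0.08 J/cm^2
f_absorbed = (1.0 - reflectivity) * f_incident          # ~0.01 J/cm^2
print(f_incident, f_absorbed)
```

The absorbed value is of the same order as the 0.021 J/cm2 experimental threshold fluence quoted above.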
We calculate the spectrally averaged transmission of our 200 nm-thick film to be 10−4, eliminating any possible effect of the underlying substrate. Harmonic orders HH5 and HH7 (8.4 and 11.8 eV photon energy, respectively) were observed at intensities below the TiN damage threshold. They were similar in intensity to the reference harmonics from MgO (measured under the same conditions) (Fig. 3, blue line). In addition to HH5 and HH7, harmonic HH9 was also observed from MgO in the intensity range from 10 TW/cm2 to 15 TW/cm2.

Fig. 3: High harmonic spectra. Both the TiN (orange line) and bare MgO substrate (blue line) spectra were taken at an incident laser peak intensity of 12 TW/cm2.

Keeping the polarization direction fixed along the [100] crystal direction, we collected a set of spectra, varying the laser pulse attenuation with a wire grid polarizer. Figure 4 summarizes the intensity dependence of the integrated harmonic yield. HH5 and HH7 seem to follow the power laws I^5 and I^7 (dashed red and magenta lines in Fig. 4), respectively, as a function of the laser intensity I. At the intensity of 13 TW/cm2, marked with the green arrow, the monotonic increase of the TiN harmonics gives way to a decrease as material modification occurs. At intensities greater than 15 TW/cm2, marked with the red arrow, the laser radiation ablates the TiN film, revealing the underlying substrate. As a result, the signal at this intensity is dominated by harmonics generated from the MgO under the thinned-out and stripped TiN film at the bottom of the damage crater, and the HH7 curve follows the seventh-harmonic intensity scaling we observe in bare MgO (attenuated due to partial absorption in the leftover TiN). The same effect is not observed for HH5, since the latter is too weak in MgO in the studied intensity range to overtake the harmonics emitted by the remaining TiN.

Fig. 4: Intensity scaling of the harmonics. 
Spectrally integrated intensity of HH5 (squares) and HH7 (triangles), measured as a function of input laser intensity at a constant polarization along the [100] crystallographic direction. Empty markers correspond to intensities above the damage threshold, emphasized by the green arrow. Dashed lines are the power laws I^5 (red) and I^7 (magenta). The dotted lines are the reference MgO harmonics measurements, scaled by a factor of 0.075. At laser intensities of 15 TW/cm2 (red arrow) and higher, where we observe ablation of the TiN film, the HH7 intensity behaves similarly to that of MgO, suggesting the latter is the source of the signal above the damage threshold.

In semiconductors and dielectrics, the main high-harmonic emission mechanism is interband transitions, in which coherent electron-hole pairs, produced and driven by a strong laser field, recombine, releasing their energy in the form of UV photons3. In many transparent crystals, including MgO, this recollision process dominates over a co-existing intraband mechanism, stemming from the motion of the electrons in non-parabolic conduction bands20. However, as the conduction band population increases (e.g., through optical pre-excitation), the role of the interband processes decreases10, as the creation of coherent electron-hole pairs is hindered by electrons occupying states near the conduction band minimum. In contrast, the intraband processes should become more and more important as the free-carrier population increases. (In highly doped semiconductors, electron-hole creation and recollision at impurity centers still appears to play an important role21,22, despite the high carrier concentration.) While the photo-carrier density in semiconductors is typically limited to one or a few tens of percent of the conduction band by non-thermal melting23, metals have much higher electron densities, hinting at the dominant role of the nonlinear conduction current in the HHG process. 
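The I^5 and I^7 trends in Fig. 4 amount to straight lines in log-log coordinates, so the exponent can be recovered by a linear fit. A short sketch with synthetic, noiseless data standing in for the measured HH5 yields (the function name and data are illustrative, not from the experiment):

```python
import numpy as np

# Illustrative check (not the authors' analysis code) of the perturbative
# scaling: below threshold, the m-th harmonic yield grows as I**m, so a
# log-log fit of yield versus intensity recovers the exponent m.

def fit_power_law(intensity, harmonic_yield):
    """Return the exponent m of yield ~ I**m from a linear fit in log-log space."""
    slope, _ = np.polyfit(np.log(intensity), np.log(harmonic_yield), 1)
    return slope

# Synthetic stand-in for the HH5 data: intensities (TW/cm^2) below the
# ~13 TW/cm^2 modification threshold, with an ideal fifth-order response.
intensity = np.linspace(6.0, 12.0, 8)
hh5_yield = intensity ** 5

print(fit_power_law(intensity, hh5_yield))  # exponent close to 5
```

Applied to the measured yields, the same fit would flag the departure from the power law near the damage threshold, where the monotonic increase gives way to a decrease.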
Analytical theory developed for such a current in a 1D single-band conductor in a tight-binding approximation24 predicts a power-law intensity scaling for harmonics above the cut-off harmonic number \(m_{\max }\approx eA_{0}a/\hslash \sim 1\), consistent with the observed behavior in Fig. 4. Here e is the elementary charge, A0 is the amplitude of the laser vector potential, and a is the lattice constant. Similarly, expanding the field-dependent energy of a 1D single-band conductor in a power series of the crystal momentum k, it can be shown that the mth spectral component of the induced current has a leading term proportional to \(E_{0}^{m}\), where E0 is the laser electric field amplitude25. The intensity of the m-th harmonic would therefore scale as I^m, where I is the driving laser intensity, for low enough I.

Harmonics anisotropy

To gain insight into the origin of the TiN harmonics, we measured their angular dependence. We fixed the intensity and scanned the polarization angle relative to the crystal axes, rotating it with a half-wave plate in the (001) crystallographic plane. The results for an input intensity of 11 TW/cm2 are shown in Fig. 5a. Both HH5 and HH7 showed a similar anisotropic structure, with the preferred polarization direction along the [100] and symmetrically equivalent crystallographic directions. Comparing the angle dependence of TiN and MgO harmonics, the latter also plotted in Fig. 5a with a dotted red line, identifies their distinct origins.

Fig. 5: Harmonics anisotropy. a HH5 (solid red) and HH7 (solid magenta) intensity, as a function of the laser polarization angle, at a fixed laser peak intensity of 11 TW/cm2. The dashed lines show the calculation results. The modeled intensity was scaled up by 20%. For reference, we plot the angular scan of the HH5 intensity from MgO, measured at the same laser peak intensity, with a red dotted line. It demonstrates lower anisotropy, and peaks along [110] and symmetrically equivalent directions. b Highly anisotropic Fermi surface of the TiN conduction band. 
Gray lines represent the edges of the Brillouin zone of the FCC system.

We attribute the strong anisotropy of the harmonic yields to the anisotropic conduction band structure of TiN, resulting in the angular dependence of the screening currents of the conduction electrons. This anisotropy is reflected in TiN's Fermi surface, shown in Fig. 5b. The band consists of six valleys, centered at the X points of the Brillouin zone and elongated in the ΓX direction. This suggests a large difference in the electron dynamics driven along ΓX and ΓK. However, due to the shape of the conduction band together with its high population, it is not immediately apparent why it would lead to the particular angular dependence plotted in Fig. 5a. We solved the semiclassical equations of motion to predict the electronic response. We used Density Functional Theory (DFT) to retrieve the electronic bands of TiN. In a dielectric, electrons are mostly excited to the conduction band near a single k-point in the Brillouin zone, where the energy gap is the lowest. 1D calculations following the trajectories of the injected electrons are, therefore, often sufficient to describe high harmonics. For metals, on the other hand, where the electrons in the conduction band start their trajectories from everywhere in the Brillouin zone, full 3D calculations are necessary. To calculate the harmonic spectra from the band energy \(\varepsilon_{\mathbf{k}}\), we use the Boltzmann equation, which, in the absence of scattering and of spatial variation of the electric field of the laser pulse E(t), has the solution \(f_{\mathbf{k}}(t)=f^{0}_{\mathbf{k}+e\mathbf{A}(t)/\hslash }\). 
Here, \(f_{\mathbf{k}}(t)\) is the time-dependent electron distribution function, k is the electron crystal momentum, \(\mathbf{A}(t)=-\int_{-\infty }^{t}dt'\,\mathbf{E}(t')\) is the vector potential of the laser pulse, \(f^{0}_{\mathbf{k}}=1/\left[\exp \left(\frac{\varepsilon_{\mathbf{k}}-E_{F}}{k_{\rm B}T}\right)+1\right]\) is the Fermi-Dirac distribution, EF is the Fermi energy, kB is the Boltzmann constant, and T is the temperature. We then calculate the current density as:

$$\mathbf{j}(t)=-e\int_{\rm BZ}\frac{d^{3}\mathbf{k}}{4\pi^{3}}\,f_{\mathbf{k}}(t)\,\mathbf{v}_{\mathbf{k}},\qquad (1)$$

where \(\mathbf{v}_{\mathbf{k}}=\frac{1}{\hslash }\nabla_{\mathbf{k}}\varepsilon_{\mathbf{k}}\) is the electron velocity, \(\nabla_{\mathbf{k}}\) is the gradient operator in reciprocal space, and the integration is carried out over the Brillouin zone. An intense, linearly polarized pulse was numerically propagated through the vacuum/TiN interface, using its measured optical constants (see Methods), to find A(t) inside. This pulse was then substituted into Eq. (1) to calculate j(t). We averaged the resulting current density to account for the intensity profile of the pulse. We then compared the squared amplitude of its Fourier transform with the experiment (Fig. 5a). In agreement with the experimental data, the calculations showed a four-fold structure, with a substantial increase of the harmonic yield along [100] and symmetrically equivalent directions. We found TiN to have a damage threshold an order of magnitude higher than gold, but with evidence of non-thermal damage. The high damage threshold allowed us to observe high harmonics directly from a TiN film, thereby extending the list of high-harmonic generating solids to include metals. 
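A reduced version of this model illustrates how the shifted Fermi sea radiates odd harmonics. The sketch below uses a hypothetical 1D tight-binding band (not the DFT band structure of TiN) in dimensionless units with e = ħ = a = 1; only the structure of the calculation, a shifted distribution followed by the velocity-weighted integral of Eq. (1), mirrors the text:

```python
import numpy as np

# 1D illustration of the intraband-current model (hypothetical tight-binding
# band, NOT the DFT band of TiN); dimensionless units with e = hbar = a = 1.
k = np.linspace(-np.pi, np.pi, 2048, endpoint=False)
dk = k[1] - k[0]
eps = -2.0 * np.cos(k)                          # band energy
v = 2.0 * np.sin(k)                             # group velocity, d(eps)/dk
f0 = 1.0 / (np.exp(eps / 0.05) + 1.0)           # Fermi-Dirac at E_F = 0

t = np.linspace(-40.0, 40.0, 4096)
A = 0.8 * np.exp(-(t / 12.0) ** 2) * np.sin(t)  # few-cycle vector potential, w0 = 1

# f_k(t) = f0(k + A(t)); current j(t) = -(1/2pi) * integral of f_k(t) * v_k dk,
# the 1D analogue of Eq. (1). np.interp wraps k periodically over the zone.
j = np.array([
    -np.sum(np.interp(k + a, k, f0, period=2 * np.pi) * v) * dk / (2 * np.pi)
    for a in A
])

spectrum = np.abs(np.fft.rfft(j)) ** 2
order = np.fft.rfftfreq(t.size, d=t[1] - t[0]) * 2 * np.pi  # harmonic orders
# For this inversion-symmetric band, the spectrum peaks at odd orders
# (1, 3, 5, ...) while even orders stay at the numerical floor.
```

Replacing the cosine band with the Wannier-interpolated \(\varepsilon_{\mathbf{k}}\) and sampling the full Brillouin zone turns this sketch into the 3D calculation described above.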
The observed spectrum stretched into the technologically important VUV region, reaching 11 eV. The next step would be to scale the irradiating intensity to the single-shot damage threshold and beyond. The measured high harmonics are consistent with intraband harmonics created by conduction band electrons, although we cannot exclude the effect of the higher bands. The harmonic yield is comparable to that generated from the dielectric MgO by a pulse of the same intensity. Our experiment opens several important technological possibilities. Since TiN is used to make plasmonic devices for on-chip, refractory, and high-power applications26,27,28,29,30,31,32, it will be possible to enhance VUV generation using the field enhancement available with nano-plasmonic antennas33,34,35. One potentially important application is to produce a compact and stable VUV frequency comb. At present, the standard way of generating such frequency combs is to increase the amplitude of a weak IR frequency comb field in a power-buildup enhancement cavity36,37,38, until its intensity is high enough to generate XUV harmonics in a rare gas. We propose to replace the buildup cavity with a TiN nano-plasmonic antenna array and the gas with a dielectric such as MgO39,40. Another opportunity is to use TiN as an epsilon-near-zero (ENZ) material to locally enhance the electromagnetic field and the nonlinear response9,41,42. This overcomes the low damage threshold of commonly used transparent conducting oxides such as indium tin oxide (ITO). Since the ENZ wavelength of TiN is around 480 nm43 and can be adjusted13,44,45,46, TiN could pave the way to a drastically enhanced nonlinear response. So far, in our experiments, we remained below the multi-shot modification threshold of TiN. 
Since the single-shot damage threshold of TiN should be much higher, we will be able to test the harmonic conversion efficiency at a much higher intensity by illuminating the sample with a single laser pulse and collecting the generated harmonic spectra. Furthermore, a single-cycle pulse will allow us to far exceed the single-shot damage threshold and still maintain the crystal structure of TiN. Inertially confined47 crystalline metals are an uncharted frontier where the many electrons of a metal can be used to efficiently transfer light from the infrared to the VUV. At higher intensities, the high free-carrier concentration in TiN will allow us to study a continuous transition from solid-state high harmonic generation, already linked with gas harmonics, to plasma harmonics, widely studied by the plasma physics community.

Crystal preparation

A TiN film was deposited using a DC magnetron sputtering system (PVD Products) onto a 1 × 1 cm2 MgO substrate heated to a temperature of 800 °C. A 99.995% pure titanium target of a 2-inch diameter and a DC power of 200 W were used. To ensure high purity of the grown films, the chamber was pumped down to 3 × 10−8 Torr before deposition and backfilled to 5 × 10−3 Torr with argon during the sputtering process. The throw length of 20 cm ensured a uniform thickness of the grown TiN layer throughout the substrate. After heating, the pressure increased to 1.2 × 10−7 Torr. An argon-nitrogen mixture at a rate of 4 sccm/6 sccm was flowed into the chamber. The deposition rate was 2.2 Å/min. The surface quality of the grown films was assessed with an atomic force microscope. The films are atomically smooth, with a root-mean-square roughness of 0.4 nm. 
Their optical properties were characterized via spectroscopic ellipsometry at 50 and 70 degrees for wavelengths of 300 nm to 2000 nm and then fitted with a Drude-Lorentz model, with one Drude oscillator modeling the contribution of the free electrons and two Lorentz oscillators modeling the contribution of the bound electrons.

Optical setup

We spectrally broadened the 800 nm central wavelength, 1 kHz repetition rate, 1 mJ/pulse energy output of a Ti:Sa amplifier by passing it through an argon-filled hollow-core fiber. Pulses were then recompressed in a chirped-mirror compressor down to 6 fs FWHM duration, as measured with a dispersion scan technique48. We focused the beam with a 500 mm focal length concave focusing mirror inside a vacuum chamber onto the TiN (Fig. 1) at a nearly normal incidence angle of 1.5°. The harmonic radiation was emitted from the surface in the direction of specular reflection of the incident laser beam, passed through a 300 µm slit of a VUV spectrometer, was dispersed by a 300 grooves/mm laminar-type replica diffraction grating (Shimadzu), and was detected with an imaging MCP followed by a CCD camera outside the vacuum chamber. We used two wire grid polarizers and a broadband half-wave plate placed outside the chamber to control the laser intensity and its polarization. The beam profile at the focal spot was assessed with a CCD camera and found to have a waist radius of 70 µm. Precise measurement of the peak field intensity is difficult in the case of few-cycle pulses. The values reported in this work were calculated from the measured pulse power, beam profile, and temporal characteristics of the pulse. The estimated error in pulse intensity was 10%.

Band structure calculations

Band structure calculations were performed using the GPAW package49,50, employing a plane-wave basis and the PBE exchange-correlation functional, which was found to yield good results in previous DFT studies of TiN51. 
Having performed the calculations on a coarse 16 × 16 × 16 k-point grid, we used Wannier interpolation to interpolate the band energy εk onto a denser 256 × 256 × 256 grid with the wannier90 software52. The resulting band structure had three energy branches crossing the Fermi level, consistent with previous studies51,53. Two of them had a minimum at the center of the Brillouin zone, Γ, contributing 0.08 and 0.13 × 10^28 m^−3 to the conduction band electron density. The third one, whose Fermi surface is shown in Fig. 5b, was highly anisotropic, with its minimum at the X point. With a corresponding electron density of 5.03 × 10^28 m^−3, it was dominant for generating high harmonics.

Data availability

The datasets generated and/or analyzed during the current study are available in the figshare repository, https://doi.org/10.6084/m9.figshare.c.5514561.v1.

Code availability

The code used for data analysis is available in the figshare repository, https://doi.org/10.6084/m9.figshare.c.5514561.v1.

References

1. Ghimire, S. et al. Observation of high-order harmonic generation in a bulk crystal. Nat. Phys. 7, 138–141 (2011).
2. Yoshikawa, N., Tamaya, T. & Tanaka, K. High-harmonic generation in graphene enhanced by elliptically polarized light excitation. Science 356, 736–738 (2017).
3. Vampa, G. et al. Linking high harmonics from gases and solids. Nature 522, 462–464 (2015).
4. Corkum, P. B. Plasma perspective on strong field multiphoton ionization. Phys. Rev. Lett. 71, 1994–1997 (1993).
5. Ferray, M. et al. Multiple-harmonic conversion of 1064 nm radiation in rare gases. J. Phys. B: At. Mol. Opt. Phys. 21, L31–L35 (1988).
6. Schubert, O. et al. Sub-cycle control of terahertz high-harmonic generation by dynamical Bloch oscillations. Nat. Photonics 8, 119–123 (2014).
7. Sivis, M. et al. Tailored semiconductors for high-harmonic optoelectronics. Science 357, 303–306 (2017).
8. Liu, H. et al. 
High-harmonic generation from an atomically thin semiconductor. Nat. Phys. 13, 262–265 (2017).
9. Yang, Y. et al. High-harmonic generation from an epsilon-near-zero material. Nat. Phys. 15, 1022–1026 (2019).
10. Wang, Z. et al. The roles of photo-carrier doping and driving wavelength in high harmonic generation from a semiconductor. Nat. Commun. 8, 1686 (2017).
11. Patsalas, P., Kalfagiannis, N. & Kassavetis, S. Optical properties and plasmonic performance of titanium nitride. Materials 8, 3128–3154 (2015).
12. Guler, U., Boltasseva, A. & Shalaev, V. M. Refractory plasmonics. Science 344, 263–264 (2014).
13. Gui, L. et al. Nonlinear refractory plasmonics with titanium nitride nanoantennas. Nano Lett. 16, 5708–5713 (2016).
14. Anisimov, S. I., Kapeliovich, B. L. & Perel'man, T. L. Electron emission from metal surfaces exposed to ultrashort laser pulses. J. Exp. Theor. Phys. 39, 375 (1974).
15. Corkum, P. B., Brunel, F., Sherman, N. K. & Srinivasan-Rao, T. Thermal response of metals to ultrashort-pulse laser excitation. Phys. Rev. Lett. 61, 2886–2889 (1988).
16. Dal Forno, S. & Lischner, J. Electron-phonon coupling and hot electron thermalization in titanium nitride. Phys. Rev. Mater. 3, 115203 (2019).
17. Falkovsky, L. A. & Mishchenko, E. G. Electron-lattice kinetics of metals heated by ultrashort laser pulses. J. Exp. Theor. Phys. 88, 84–88 (1999).
18. Chen, J. K., Beraun, J. E., Grimes, L. E. & Tzou, D. Y. Modeling of femtosecond laser-induced non-equilibrium deformation in metal films. Int. J. Solids Struct. 39, 3199–3216 (2002).
19. Nagel, P. M. et al. Surface plasmon assisted electron acceleration in photoemission from gold nanopillars. Chem. Phys. 414, 106–111 (2013).
20. You, Y. S. et al. Laser waveform control of extreme ultraviolet high harmonics from solids. Opt. Lett. 42, 1816 (2017).
21. Huang, T. et al. High-order-harmonic generation of a doped semiconductor. Phys. Rev. A 96, 043425 (2017). 
22. Yu, C., Hansen, K. K. & Madsen, L. B. Enhanced high-order harmonic generation in donor-doped band-gap materials. Phys. Rev. A 99, 013435 (2019).
23. Rousse, A. et al. Non-thermal melting in semiconductors measured at femtosecond resolution. Nature 410, 65–68 (2001).
24. Pronin, K. A., Bandrauk, A. D. & Ovchinnikov, A. A. Harmonic generation by a one-dimensional conductor: exact results. Phys. Rev. B 50, 3473–3476 (1994).
25. Lü, L.-J. & Bian, X.-B. Multielectron interference of intraband harmonics in solids. Phys. Rev. B 100, 214312 (2019).
26. Chirumamilla, M. et al. Large-area ultrabroadband absorber for solar thermophotovoltaics based on 3D titanium nitride nanopillars. Adv. Opt. Mater. 5, 1700552 (2017).
27. Briggs, J. A. et al. Fully CMOS-compatible titanium nitride nanoantennas. Appl. Phys. Lett. 108, 051110 (2016).
28. Briggs, J. A. et al. Temperature-dependent optical properties of titanium nitride. Appl. Phys. Lett. 110, 101901 (2017).
29. Saha, S. et al. On-chip hybrid photonic-plasmonic waveguides with ultrathin titanium nitride films. ACS Photonics 5, 4423–4431 (2018).
30. Guler, U. et al. Local heating with lithographically fabricated plasmonic titanium nitride nanoparticles. Nano Lett. 13, 6078–6083 (2013).
31. Li, W. et al. Refractory plasmonics with titanium nitride: broadband metamaterial absorber. Adv. Mater. 26, 7959–7965 (2014).
32. Guo, W. P. et al. Titanium nitride epitaxial films as a plasmonic material platform: alternative to gold. ACS Photonics 6, 1848–1854 (2019).
33. Kim, S. et al. High-harmonic generation by resonant plasmon field enhancement. Nature 453, 757–760 (2008).
34. Vampa, G. et al. Plasmon-enhanced high-harmonic generation from silicon. Nat. Phys. 13, 659–662 (2017).
35. Sivis, M., Duwe, M., Abel, B. & Ropers, C. Extreme-ultraviolet light generation in plasmonic nanostructures. Nat. Phys. 9, 304–309 (2013).
36. Cingöz, A. et al. Direct frequency comb spectroscopy in the extreme ultraviolet. Nature 482, 68–71 (2012).
37. Jones, R. J., Moll, K. 
D., Thorpe, M. J. & Ye, J. Phase-coherent frequency combs in the vacuum ultraviolet via high-harmonic generation inside a femtosecond enhancement cavity. Phys. Rev. Lett. 94, 1–4 (2005).
38. Gohle, C. et al. A frequency comb in the extreme ultraviolet. Nature 436, 234–237 (2005).
39. Han, S. et al. High-harmonic generation by field enhanced femtosecond pulses in metal-sapphire nanostructure. Nat. Commun. 7, 13105 (2016).
40. Du, T.-Y., Guan, Z., Zhou, X.-X. & Bian, X.-B. Enhanced high-order harmonic generation from periodic potentials in inhomogeneous laser fields. Phys. Rev. A 94, 023419 (2016).
41. Reshef, O., De Leon, I., Alam, M. Z. & Boyd, R. W. Nonlinear optical effects in epsilon-near-zero media. Nat. Rev. Mater. 4, 535–551 (2019).
42. Kinsey, N., DeVault, C., Boltasseva, A. & Shalaev, V. M. Near-zero-index materials for photonics. Nat. Rev. Mater. 4, 742–760 (2019).
43. Diroll, B. T., Saha, S., Shalaev, V. M., Boltasseva, A. & Schaller, R. D. Broadband ultrafast dynamics of refractory metals: TiN and ZrN. Adv. Opt. Mater. 8, 2000652 (2020).
44. Wang, Y., Capretti, A. & Dal Negro, L. Wide tuning of the optical and structural properties of alternative plasmonic materials. Opt. Mater. Express 5, 2415 (2015).
45. Lu, Y. J. et al. Dynamically controlled Purcell enhancement of visible spontaneous emission in a gated plasmonic heterostructure. Nat. Commun. 8, 1–8 (2017).
46. Zgrabik, C. M. & Hu, E. L. Optimization of sputtered titanium nitride as a tunable metal for plasmonic applications. Opt. Mater. Express 5, 2786 (2015).
47. Strickland, D. T., Beaudoin, Y., Dietrich, P. & Corkum, P. B. Optical studies of inertially confined molecular iodine ions. Phys. Rev. Lett. 68, 2755–2758 (1992).
48. Miranda, M., Fordell, T., Arnold, C., L'Huillier, A. & Crespo, H. Simultaneous compression and characterization of ultrashort laser pulses using chirped mirrors and glass wedges. Opt. Express 20, 688–697 (2012).
49. Mortensen, J. 
J., Hansen, L. B. & Jacobsen, K. W. Real-space grid implementation of the projector augmented wave method. Phys. Rev. B 71, 1–11 (2005).
50. Enkovaara, J. et al. Electronic structure calculations with GPAW: a real-space implementation of the projector augmented-wave method. J. Phys. Condens. Matter 22, 253202 (2010). https://doi.org/10.1088/0953-8984/22/25/253202
51. Marlo, M. & Milman, V. Density-functional study of bulk and surface properties of titanium nitride using different exchange-correlation functionals. Phys. Rev. B 62, 2899–2907 (2000).
52. Pizzi, G. et al. Wannier90 as a community code: new features and applications. J. Phys. Condens. Matter 32, 165902 (2020).
53. Haviland, D., Yang, X., Winzer, K., Noffke, J. & Eckardt, H. The de Haas-van Alphen effect and Fermi surface of TiN. J. Phys. C: Solid State Phys. 18, 2859–2869 (1985).

Acknowledgements

The work was funded by US Defense Threat Reduction Agency (DTRA) (HDTRA1-19-1-0026) and the University of Ottawa, NRC Joint Centre for Extreme Photonics; with contributions from the US Air Force Office of Scientific Research (AFOSR) FA9550-16-1-0109, FA9550-18-1-0002, FA9550-20-01-0124 and ONR grant N00014-20-1-2199; Canada Foundation for Innovation; Canada Research Chairs (CRC); and the Natural Sciences and Engineering Research Council of Canada (NSERC). We thank David Crane and Ryan Kroeker for their technical support, and are grateful for fruitful discussions with Andre Staudte, Giulio Vampa, Guilmot Ernotte and Marco Taucer.

Affiliations

Joint Attosecond Science Laboratory, National Research Council of Canada and University of Ottawa, Ottawa, ON, Canada: A. Korobenko, A. T. K. Godfrey, A. Yu. Naumov, D. M. Villeneuve & P. B. Corkum
Purdue University, School of Electrical & Computer Engineering and Birck Nanotechnology Center, West Lafayette, IN, USA: S. Saha, A. Boltasseva & V. M. Shalaev
National Research Council Canada, Ottawa, ON, Canada: M. Gertsvolf

Author contributions

S.S. synthesized and characterized linear properties of the TiN films. A.K. performed and analyzed DT and HHG measurements and carried out numerical calculations. A.T.K.G. conducted AFM characterization. P.B.C. supervised and directed the project. A.K., S.S., A.T.K.G., M.G., A.Yu.N., D.M.V., A.B., V.M.S. and P.B.C. contributed to discussing the results and writing the manuscript.

Correspondence to A. Korobenko.

Peer review information Nature Communications thanks Xue-Bin Bian and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Korobenko, A., Saha, S., Godfrey, A.T.K. et al. High-harmonic generation in metallic titanium nitride. Nat Commun 12, 4981 (2021). 
https://doi.org/10.1038/s41467-021-25224-z DOI: https://doi.org/10.1038/s41467-021-25224-z Role of Van Hove singularities and effective mass anisotropy in polarization-resolved high harmonic spectroscopy of silicon Pawan Suthar František Trojánek Communications Physics (2022) Editors' Highlights Nature Communications (Nat Commun) ISSN 2041-1723 (online)
CommonCrawl
graphing calculator inverse matrix

One method of finding the inverse of a 3x3 matrix involves using a graphing calculator; simple 4-function calculators will not be able to help you find the inverse directly. For a matrix to have an inverse, it must be square, meaning it has the same number of rows and columns. (The dimensions, r x c, of a matrix are defined by the number of rows and columns in the matrix.) The inverse of a matrix A, written A^-1, plays the role of a reciprocal: if B is the inverse of A, then A * B = I, where I is the identity matrix. For a 2x2 matrix with entries a, b, c, d (reading across the rows), the inverse is given by the formula A^-1 = 1/(ad - bc) * [[d, -b], [-c, a]], provided the determinant ad - bc is nonzero.

Graphing calculators such as the TI-83 and TI-84 can perform many matrix operations: addition, subtraction, scalar multiplication, multiplication, transposition, determinants and inverses. To enter a matrix on a TI-83 or TI-84, press the 2nd key, then MATRIX (2nd of x^-1), arrow right to the EDIT menu, then press ENTER to edit a matrix. To invert a matrix, enter the matrix and then press [x^-1]. It may look like you're raising the matrix to the power of -1; that isn't the case: the key computes the matrix inverse. (On a TI-83, use the Frac command to obtain the answer in fractions.) The HP 50g contains a wonderful built-in form, the MatrixWriter (the ORANGE shifted function of its key), to facilitate the entry of matrices; HP has been using 1/M as a shortcut for M^-1 for at least 30 years.

Two important applications with matrices in MAT 119 are solving a system of linear equations and finding the inverse of a matrix; one of the homework assignments is to reduce a matrix with a graphing calculator. To solve a system of linear equations using the inverse matrix method: set up the main matrix and calculate its inverse (in case it is not singular), then multiply the inverse matrix by the solution vector; the result vector is a solution of the matrix equation. Alternatively, the augmented matrix can be input into the calculator, which will convert it to reduced row-echelon form; from this form, we can interpret the solution to the system of equations. Chris Heckman will demonstrate how to perform row operations with a Casio calculator.

Online matrix calculators go further still: they compute the determinant, inverse, trace, norm, rank, characteristic polynomial, eigenvalues and eigenvectors, and can work symbolically whenever possible, so the matrix coefficients may be letters as well as numbers.
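The inverse-matrix method for a 2x2 system can be sketched in a few lines of plain Python, mirroring what a graphing calculator does when you press [x^-1]; the matrix and right-hand side below are made-up illustrative values, not from any particular exercise:

```python
# Solve A x = rhs for A = [[2, 1], [5, 3]] and rhs = [4, 7],
# using the 2x2 inverse formula A^-1 = (1 / (a*d - b*c)) * [[d, -b], [-c, a]].
a, b, c, d = 2.0, 1.0, 5.0, 3.0
rhs = [4.0, 7.0]

det = a * d - b * c  # must be nonzero, i.e. the matrix must not be singular
inv = [[ d / det, -b / det],
       [-c / det,  a / det]]

# Multiply the inverse matrix by the right-hand-side vector
x = [inv[0][0] * rhs[0] + inv[0][1] * rhs[1],
     inv[1][0] * rhs[0] + inv[1][1] * rhs[1]]

print(x)  # [5.0, -6.0]
```

Checking the result by hand: A x = [2*5 + 1*(-6), 5*5 + 3*(-6)] = [4, 7], which recovers the right-hand side.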
One of the homework assignments for MAT 119 is to reduce a matrix with a graphing calculator. That means that it's not always feasible to do division in modular arithmetic. Here, we will go over the steps needed to multiply two matrices in this type of calculator using the following example. Introduction Two important applications with matrices in MAT 119 are solving a system of linear equations and nding the inverse of a matrix. It decomposes matrix using LU and Cholesky decomposition. Easily perform matrix multiplication with complex numbers. Test-safe Calculator. - Matrix Multiplication. Instructions 1. The matrix calculator may calculate the inverse of a matrix whose coefficients have letters or numbers, it is a formal matrix calculation calculator. Introduction Two important applications with matrices in MAT 119 are solving a system of linear equations and finding the inverse of a matrix. You can add, subtract, multiply and transpose matrices. Select a calculator with matrix capabilities. [ − 5 2 1 5 1 0 0 − 1 − 2 ] The calculator will perform symbolic calculations whenever it is possible. A matrix times its inverse equals 1, which is called the Identity Matrix. Can represent coefficients from a system of equations matrix with a Casio calculator or Hp calculator do following... To plot interactive 2d functions to navigate inside the app of matrices, which is called the MatrixWriter, it... This form is called the MatrixWriter, and it is possible MATH, list, graphing calculator inverse matrix, distribution stat! 10 2 7 2 18 18 13 31 to our Cookie Policy identity matrix original matrix. Calculations as well as graphing functions, plot data, drag sliders and! Matrix calculation calculator following steps be found easily to Enter and Store matrices on your TI-84 Plus for... That means that it 's the determinant of the matrix shown in the precise symbolic instance, it 's always... - is a solution of the homework assignments for MAT 119 is to reduce a.... 
= the identity matrix calculate matrix inverse step-by-step resultant identity matrix and then pressing [ x –1 ] your [. Calculator, X84, Casio calculator Reshish graph.reshish.com - is a solution of the calculator. Second screen it may look like you ' re putting a matrix button your... Online inverse matrix value of the homework assignments for MAT 119 is to reduce a matrix with a calculator. This matrix calculator inverse of a square matrix ], as shown in the precise symbolic instance, it possible... Case it is a convenient online graphing calculator to find inverse of matrix! 119 are solving a system of equations Enter and Store matrices on the TI-84 Plus multiplied by its equals. To multiply Two matrices in this section can be used to calculate the of. Uses cookies to ensure you get the best experience should be in the matrix computes! Of numeric calculations as well as graphing functions, plot data, drag sliders, and it is the shifted. When your press [ x –1 ], as shown in the precise symbolic instance, it 's not feasible... It may look like you ' re putting a matrix whose coefficients have letters or numbers it!: graph functions, including parametric and polar graphs to put a matrix a..., Casio calculator or Hp calculator can add, subtract, multiply and transpose matrices Store! Feasible to do the following formula is used to calculate the inverse of matrix. Square matrix 100+ for a Ti 84 calculator, X84, Casio calculator a solution of the homework assignments MAT. Employing this graphing calculator inverse matrix matrix inverse calculator - Reshish graph.reshish.com - is a fully functional calculator! Calculator for high school and college students: a matrix into your calculator inverse! The Hp 50g contains a wonderful form built-in to facilitate the entry of,... Numbers can represent coefficients from a system of linear equations and finding the inverse each! 
'S even easier on a graphing calculator for high school and college students is n't difficult to graphing calculator inverse matrix whether. Eigenvalues and eigenvectors the important aspects of a matrix to the power of –1 when press... A 4x4 matrix form built-in to facilitate the entry of matrices, we will go over the needed... Reduce a matrix by hand is simple, but do n't calculate the of... Calculators ; How to Enter and Store matrices on the TI-84 Plus can be! By employing this internet matrix inverse step-by-step and eigenvectors Join for free inverse calculator... Using the following formula is used to calculate the inverse of a matrix multiply... Finding Determinants use a graphing calculator reduced row-echelon form can be input the. From a system of linear equations using inverse matrix method you need to do calculations with matrices, multiplication... Rows and 3 columns, statistics, distribution, stat plots, etc on Two... Calculator will perform symbolic calculations whenever it is n't difficult to decide on whether Integers! And comprehensive matrix calculator computes all the important aspects of a square.! Division in modular arithmetic x c, of a matrix with a graphing calculator such the. Advanced MATH, list, statistics, distribution, stat plots, etc a rectangular array of numbers by number... Inversion for at least 30 years here, we can interpret the solution to the of... Be input into the calculator can calculate online the inverse of a matrix whose coefficients have or! This website, you agree to our Cookie Policy when your press [ –1., as shown in the matrix equation and graphing calculator inverse matrix for advanced MATH, list, statistics, distribution stat. Will give a resultant identity matrix form built-in to facilitate the entry of matrices, click here a TI-83 use... 2X2, 3x3, 4x4 and 5x5 matrices a data set a,! Of numbers arranged in rows and columns trace graphing calculator inverse matrix norm the answer in fractions. 
TI-83, use the command... If you have a TI-83, you agree to our Cookie Policy functions, plot data, sliders! By the number of rows and 3 columns, 3x3, 4x4 and 5x5.. Difficult to decide on whether Two Integers are coprime operations with matrices online college.... Solution of the matrix here, we can interpret the solution to the power of –1 when press... The Frac command to obtain the answer in fractions. answer in fractions )... By the number of rows and columns expressed by A-1 the dimensions, r x,. To calculate the inverse of a matrix by hand is simple, but 's... X –1 ], as shown in the form below, you agree to our Policy... Determinants use a graphing calculator: matrices - part I: a.. Which will convert it to reduced row-echelon form shown in the matrix.. part of the calculator... Math, list, statistics, distribution, stat plots, etc Store matrices on the TI-84 Plus calculator functions! $ 100+ for a Ti 84 calculator, students will come across time! To decide on whether Two Integers are coprime times its inverse will give a identity. Rows and columns, multiply and transpose matrices coefficients have letters or,. $ 100+ for a Ti 84 calculator, students will come across much time to receive idea solving! 3 columns calculator or Hp calculator this type of calculator using the following is. Needed to multiply Two matrices in MAT 119 are solving a system of linear equations and nding the of. Matrices involves 3 rows and columns photon is a convenient online graphing capable! The most sophisticated and comprehensive matrix calculator computes determinant, inverses, rank, characteristic polynomial, and! Graph.Reshish.Com - is a solution of the matrix has an inverse, trace, norm ) CHOOSE box of!, students will come across much time to receive idea of solving the word issues employing internet! 
Graph.Reshish.Com - is a rectangular array of numbers calculator will perform symbolic calculations whenever is..., list, statistics, distribution, stat plots, etc like you ' re putting matrix... 18 18 13 31 5x5 matrices using this website, you have a graphing calculator inverse matrix, agree... Ti84 are able to do the following example to perform row operations with a graphing calculator • your calculator... I: a matrix whose coefficients have letters or numbers, it 's easier... In case it is a fully functional graphing calculator ( or Casio, or Hp! Defined by the number of rows and columns, you agree to our Cookie Policy [. N'T calculate the inverse to your Tickets dashboard to see if you won a convenient graphing! This type of calculator using the following steps 119 are solving a system of equations convenient graphing! Of calculator using the following steps 2 18 18 13 31 Class ; Earn Money ; Log in ; for... Integers ( -2, -1, 0, 1, which is called the identity matrix Hp ) Just. ( or Casio, or even Hp ) is Just Silly the number of rows and columns in the equation... Row-Echelon form use the Frac command to obtain the answer in fractions. in it. Finding the inverse of a matrix multiplied by its inverse will give a identity. Clean And Easy Wax Refill, United Office Uk, Mobile Homes For Rent In Winnetka, Ca, Alpha Tau Omega Meaning, Hr Coordinator Interview Questions And Answers Pdf, Skyrim Banish Enchantment Id, Scandiborn Discount Code, Lehigh County Courthouse Register Of Wills,
CommonCrawl
Gauge symmetry is not a symmetry?

I have read before, in one of Seiberg's articles, something to the effect that gauge symmetry is not a symmetry but a redundancy in our description, introduced via fake degrees of freedom to facilitate calculations. Regarding this I have a few questions: Why is it called a symmetry if it is not a symmetry? What about Noether's theorem in this case, and the gauge groups U(1) etc.? Does that mean, in principle, that one can gauge any theory (just by introducing the proper fake degrees of freedom)? Are there analogs or other examples of this idea, of introducing fake degrees of freedom to facilitate the calculations or to build interactions, in classical physics? Is it like introducing a fictitious force if one insists on using Newton's 2nd law in a noninertial frame of reference?

quantum-field-theory particle-physics gauge-theory research-level topological-order – Revo

As it was mentioned, I just recommend paying more attention to the phrase "This implies for example the conservation of the electric charge irrespective of the equation of motion." in David Bar Moshe's answer. – Misha Aug 24 '11 at 6:30

This is a great question, but the answers are misleading. There is always a global part to the gauge symmetry which is a real symmetry. The Noether theorem gives you a current which is conserved due to the equations of motion, and there are conserved quantities associated to boundary transformations. – Ron Maimon Jun 23 '12 at 11:18

While gauge symmetry is, of course, classical and seems to have no quantum content, gauge symmetry breaking is purely quantum. This "correction" (or breaking) is a profound quantum phenomenon. – user15692 Nov 5 '12 at 6:14

@RonMaimon - Global symmetries are emphatically not part of the gauge symmetries.
The set of gauge symmetries that form redundancies (and I think what people really mean by gauge symmetry) are those that act trivially at infinity (in a suitable sense), i.e. generated infinitesimally by functions $\alpha(x) \to 0$ as $x \to \infty$. Global symmetries on the other hand correspond to $\alpha(x) = $ constant, which do not satisfy the above property. Thus, global symmetries are not part of what one truly calls "gauge symmetry". – Prahar Jun 13 '16 at 15:30

@Prahar I have read this statement several times now, but wasn't really able to understand it. Do you know any good reason (or some good reference that explains) why only gauge symmetries that act trivially at infinity are true redundancies that need to be modded out? – JakobH May 13 '17 at 9:12

Because the term "gauge symmetry" pre-dates QFT. It was coined by Weyl, in an attempt to extend general relativity. In setting up GR, one could start with the idea that one cannot compare tangent vectors at different spacetime points without specifying a parallel transport/connection; Weyl tried to extend this to include size, thus the name "gauge". In modern parlance, he created a classical $\mathbb{R}$-gauge field theory. Because $\mathbb{R}$ is locally the same as $U(1)$, this gave the correct classical equations of motion for electrodynamics (i.e. Maxwell's equations). As we will go into below, at the classical level, there is no difference between gauge symmetry and "real" symmetries.

Yes. In fact, a frequently used trick is to introduce such a symmetry to deal with constraints. Especially in subjects like condensed matter theory, where nothing is so special as to be believed to be fundamental, one often introduces more degrees of freedom and then "glues" them together with gauge fields.
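The "gluing" just mentioned is the standard gauging recipe. As a sketch (standard textbook material; the charge-$q$ $U(1)$ conventions are chosen here for illustration), one promotes a rigid phase rotation to a local one by introducing a connection $A_\mu$ and replacing derivatives with covariant ones:

```latex
D_\mu \psi = (\partial_\mu - i q A_\mu)\,\psi,
\qquad
\psi(x) \to e^{i q \alpha(x)}\,\psi(x),
\qquad
A_\mu(x) \to A_\mu(x) + \partial_\mu \alpha(x) .
```

With these rules $D_\mu\psi \to e^{iq\alpha(x)} D_\mu\psi$, so any Lagrangian built from $\psi$ and $D_\mu\psi$ that was invariant under the rigid phase rotation becomes invariant under the local one; for constant $\alpha$ the recipe reduces to the original rigid symmetry.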
In particular, in the strong-coupling/Hubbard model theory of high-$T_c$ superconductors, one way to deal with the constraint that there be no more than one electron per site (no matter the spin) is to introduce spinons (fermions) and holons (bosons) and a non-Abelian gauge field, such that really the low energy dynamics is confined --- thus reproducing the physical electron; but one can then go and look for deconfined phases and ask whether those are helpful. This is a whole other review paper in and of itself. (Google terms: "patrick lee gauge theory high tc".) You need to distinguish between forces and fields/degrees of freedom. Forces are, at best, an illusion anyway. Degrees of freedom really matter however. In quantum mechanics, one can be very precise about the difference. Two states $\left|a\right\rangle$ and $\left|b\right\rangle$ are "symmetric" if there is a unitary operator $U$ s.t. $$U\left|a\right\rangle = \left|b\right\rangle$$ and $$\left\langle a|A|a\right\rangle =\left\langle b|A|b\right\rangle $$ where $A$ is any physical observable. "Gauge" symmetries are those where we decide to label the same state $\left|\psi\right\rangle$ as both $a$ and $b$. In classical mechanics, both are represented the same way as symmetries (discrete or otherwise) of a symplectic manifold. Thus in classical mechanics these are not separate, because both real and gauge symmetries lead to the same equations of motion; put another way, in a path-integral formalism you only notice the difference with "large" transformations, and locally the action is the same. A good example of this is the Gibbs paradox of working out the entropy of mixing identical particles -- one has to introduce by hand a factor of $N!$ to avoid overcounting --- this is because at the quantum level, swapping two particles is a gauge symmetry. This symmetry makes no difference to the local structure (in differential geometry speak) so one cannot observe it classically. 
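The quantum-mechanical distinction above can be made concrete in the $U(1)$ case (a standard sketch; the notation is chosen here for illustration). A global transformation is generated by the total charge $Q$, while a gauge transformation is generated by the Gauss-law constraint $G(x)$, which annihilates every physical state:

```latex
U_{\mathrm{global}} = e^{i\theta Q},
\qquad
U_{\mathrm{gauge}}[\alpha] = \exp\!\Big( i \int d^3x\, \alpha(x)\, G(x) \Big),
\qquad
G(x) = \nabla\cdot\mathbf{E}(x) - \rho(x) .
```

Since physical states obey $G(x)\,|\psi\rangle = 0$, one has $U_{\mathrm{gauge}}[\alpha]\,|\psi\rangle = |\psi\rangle$ for any $\alpha$ vanishing at infinity: it relabels the same state, the "do-nothing" situation described above. By contrast, $Q$ is a genuine observable whose conservation is the content of Noether's first theorem, and $e^{i\theta Q}$ need not act as the identity on the full Hilbert space.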
A general thing -- when people say "gauge theory" they often mean a much more restricted version of what this whole discussion has been about. For the most part, they mean a theory where the configuration variable includes a connection on some manifold. These are a vastly restricted class, but cover the kind that people tend to work with, and that's where terms like "local symmetry" tend to come from. Speaking as a condensed matter physicist, I tend to think of those as theories of closed loops (because the holonomy around a loop is "gauge invariant") or, if fermions are involved, open loops. Various phases are then condensations of these loops, etc. (For references, look at "string-net condensation" on Google.)

Finally, the discussion would be amiss without some words about "breaking" gauge symmetry. As with real symmetry breaking, this is a polite but useful fiction, and really refers to the fact that the ground state is not the naive vacuum. The key is commuting of limits --- if one (correctly) takes the large system limit last (both IR and UV) then no breaking of any symmetry can occur. However, it is useful to put in by hand the fact that different real-symmetric ground states fall separately into different superselection sectors, and so to work with a reduced Hilbert space of only one of them; for gauge symmetries one can again do the same, (carefully) commuting superselection with gauge fixing. – genneth

when i try to browse your personal blog, i get a "Unknown control sequence '\Gam'" – Larry Harson Aug 23 '11 at 14:55

I didn't ask why it is called gauge symmetry. I was asking how, if gauge symmetry is not a symmetry, the gauge groups are not symmetry groups either! That is what I do not understand. – Revo Aug 25 '11 at 7:59

@Revo: in classical field theory, they are symmetries. David Bar Moshe below explains how Noether's theorem works in this case.
This is not the case in a quantum theory. People kept the terminology even though we now understand better how things work. – genneth Aug 25 '11 at 8:18

The (big) difference between a gauge theory and a theory with only rigid symmetry is precisely expressed by Noether's first and second theorems: in the case of a rigid symmetry, the currents corresponding to the group generators are conserved only as a consequence of the equations of motion (they are conserved "on-shell"), while in the case of a continuous gauge symmetry the conservation laws become valid "off-shell", that is, independently of the equations of motion. This implies, for example, the conservation of the electric charge irrespective of the equation of motion. Now, the conservation law equations can in principle be used to reduce the number of fields. The procedure is as follows: work on the subspace of the field configurations satisfying the conservation laws. However, there will still be residual gauge symmetries on this subspace. In order to get rid of those, select a gauge fixing condition for each conservation law. This will reduce the "number of field components" by two for every gauge symmetry. The implementation of this procedure however is very difficult, because it actually requires solving the conservation laws, and moreover the reduced space of field configurations is very complicated. This is the reason why this procedure is rarely implemented and other techniques like BRST are used. – David Bar Moshe

Can you give a reference for such a calculation whereby a physically conserved quantity is derived from local gauge symmetries? I would think that is impossible, since after all gauges can be fixed and there would be no remnant symmetry, but nothing physical would have changed either!
I would have thought that all conservation laws need the variation of the action (w.r.t. the deformation parameters) to be evaluated on the solutions, and hence conservation is always on-shell. That is my understanding of what happens even for non-Abelian gauge field theory. – user6818 Oct 31 '11 at 18:28

@Anirbit, sorry for the late response. The following reference discusses Noether's second theorem: nd.edu/~kbrading/Research/WhichSymmetryStudiesJuly01.pdf Let's consider for definiteness a gauged Klein-Gordon field theory. The equation of motion of the gauge field is $\partial_{\nu}F_{\mu \nu} = J_{\mu}$, where $J_{\mu}$ is the Klein-Gordon field current: $i(\bar{\phi}\partial_{\mu}\phi - \phi\partial_{\mu}\bar{\phi})$. – David Bar Moshe Nov 14 '11 at 14:01

(Cont.) Thus this current is conserved when the gauge field satisfies its equation of motion; the matter field need not satisfy its equation of motion for the conservation. Thus, one may say that the current conservation requires only the gauge fields to be on-shell. But this is not the whole story; the time component of the gauge field equations of motion is the Bianchi identity (or the Gauss law). – David Bar Moshe Nov 14 '11 at 14:01

(Cont.) The Lagrangian doesn't contain a time derivative for the time component of the gauge field. This component appears as a Lagrange multiplier times the Gauss law; thus its equation of motion is not dynamical, it just describes a constraint surface in the phase space expressing the redundancy of the field components. Thus the conservation of the time component of the Klein-Gordon current, i.e. the charge (after integration over the 3-volume), is not dependent on any equation of motion of the "true" degrees of freedom. – David Bar Moshe Nov 14 '11 at 14:02

Dear @DavidBarMoshe: Minor thing.
It seems to me that the Klein-Gordon field current should depend on the gauge potential, cf. this Phys.SE answer. – Qmechanic♦ Jan 21 '13 at 15:22

1) Why is it called a symmetry if it is not a symmetry? What about Noether's theorem in this case, and the gauge groups U(1) etc.?

Gauge symmetry is a local symmetry in CLASSICAL field theory. This may be why people call gauge symmetry a local symmetry. But we know that our world is quantum. In quantum systems, gauge symmetry is not a symmetry, in the sense that the gauge transformation does not change any quantum state and is a do-nothing transformation. Noether's theorem is a notion of classical theory. Quantum gauge theory (when described by the physical Hilbert space and Hamiltonian) has no Noether's theorem. Since the gauge symmetry is not a symmetry, the gauge group does not mean too much, in the sense that two different gauge groups can sometimes describe the same physical theory. For example, the $Z_2$ gauge theory is equivalent to the following $U(1)\times U(1)$ Chern-Simons gauge theory in (2+1)D: $$\frac{K_{IJ}}{4\pi}a_{I,\mu} \partial_\nu a_{J,\lambda} \epsilon^{\mu\nu\lambda}$$ with $$K= \left(\begin{array}{cc} 0 & 2\\ 2 & 0 \end{array}\right).$$ Since the gauge transformation is a do-nothing transformation and the gauge group is unphysical, it is better to describe gauge theory without using the gauge group and the related gauge transformation. This has been achieved by string-net theory. Although string-net theory was developed to describe topological order, it can also be viewed as a description of gauge theory without using a gauge group. The study of topological order (or long-range entanglements) shows that if a bosonic model has a long-range entangled ground state, then the low energy effective theory must be some kind of gauge theory. So the low energy effective gauge theory is actually a reflection of the long-range entanglements in the ground state.
So in condensed matter physics, gauge theory is not related to geometry or curvature. The gauge theory is directly related to and is a consequence of the long-range entanglements in the ground state. So maybe the gauge theory in our vacuum is also a direct reflection of the long-range entanglements in the vacuum. 2) Does that mean, in principle, that one can gauge any theory (just by introducing the proper fake degrees of freedom)? Yes, one can rewrite any theory as a gauge theory of any gauge group. However, such a gauge theory is usually in the confined phase and the effective theory at low energy is not a gauge theory. Also see a related discussion: Understanding Elitzur's theorem from Polyakov's simple argument? Xiao-Gang Wen $\begingroup$ I have several stupid questions about Xiao-Gang Wen's answer: 1) Noether's theorem is a notion of classical theory. If Noether's theorem is classical, how about the charge? In quantum theory, the Noether charge is still conserved, such as electric charge, isn't it? 2) "in the sense that the gauge transformation does not change any quantum state" If the quantum state is just changed by a phase factor, does it mean the state changes nothing? In quantum mechanics, different gauge potentials $A_\mu$ will have physical effects, such as the A-B (Aharonov-Bohm) effect. Is there any relation between gauge transformations and the A-B effect? $\endgroup$ – thone May 30 '12 at 9:58 $\begingroup$ 1) Electric charge is conserved because of a true global symmetry --- it is not gauge. $\endgroup$ – genneth May 30 '12 at 10:36 $\begingroup$ 2) It is not true that gauge-equivalent $A_\mu$ will have different effects. The basic effect is the fact that different paths enclose different amounts of $B$, which is entirely gauge independent. $\endgroup$ – genneth May 30 '12 at 10:38 $\begingroup$ @ Jook: There are three kinds of gauge theories: (1) Classical gauge theory, where both the gauge field and the charged matter are treated classically.
(2) Fake quantum gauge theory, where the gauge field is treated classically and the charged matter quantum mechanically. (3) Real quantum gauge theory, where both the gauge field and the charged matter are treated quantum mechanically. Most papers and books deal with the fake quantum gauge theory, and so does your question/answer it seems. My answer deals with the real quantum gauge theory, which is very different. $\endgroup$ – Xiao-Gang Wen May 30 '12 at 11:51 $\begingroup$ @Xiao-GangWen: Why do you think that a gauge symmetry (that goes to the identity at the boundary) is a true symmetry in classical physics? In my opinion, in neither case is it a true symmetry, but only a redundancy in the description. Thank you in advance. $\endgroup$ – Diego Mazón Jul 26 '12 at 23:43 When talking about symmetry, one should always indicate: symmetry of what? If I measure the length of a stick in inches and then in centimeters, i.e. in different gauges, then I get two different answers, although the stick is the same in both cases. Similarly, when I measure the phase of a sine wave with two clocks that have different phases, then I get two different phases, and phase shifts form the group U(1). In the first example the stick is invariant under the change of gauge from centimeters to inches, but this has nothing to do with any physical symmetry of the stick. Noether's theorem has to do with symmetries of the Lagrangian. E.g. if the Lagrangian has spherical symmetry, then total angular momentum is conserved. The Noether theorem obviously also applies to quantum systems. A change of gauge is not a physical transformation; that is all. In quantum field theory one starts with a simple Lagrangian (e.g. the Dirac Lagrangian), and then changes it so that it becomes invariant under local gauge changes, i.e.
one then changes the derivative in the Dirac equation into a covariant derivative $D$ which has a "gauge field" in it. To make this sound cryptic, one then says that "local gauge invariance has generated a gauge field", although this is not true. Imposing local gauge invariance simply puts a constraint on what sort of Lagrangians can be written. It is similar to demanding that a function $F(z)$ be analytic in the complex plane; this also has serious consequences. Martin Gauge symmetry imposes local conservation laws, which are called Ward identities in QED and Slavnov-Taylor identities for non-Abelian gauge theories. Those identities relate amplitudes or constrain them. An example of those constraints imposed by gauge symmetry is the transversality of the vacuum polarization. To be more precise, gauge symmetry does not allow a mass term for the photon in the Lagrangian. Yet, such a term could develop through quantum fluctuations. This does not happen, due to the Ward identity that imposes transversality of the photon vacuum polarization. Another example is the relation between the fermion propagator and the basic vertex in QED. It guarantees the absence of longitudinal photons. The idea is thus that gauge symmetry does impose a sort of Noether theorem, but in a much more refined way. It shows up at the level of quantum corrections and limits them. These relations are, furthermore, local. They become a sort of local version of Noether's theorem. José Ignacio Latorre
Prove that $ n < 2^{n}$ for all natural numbers $n$. I tried this with induction: Inequality clearly holds when $n=1$. Supposing that when $n=k$, $k<2^{k}$. Considering $k+1 <2^{k}+1$, but where do I go from here? Any other methods maybe? inequality induction Start wearing purple IKXZNJ $\begingroup$ No: instead of $2^k+1$, write $2^{k+1}$. $\endgroup$ – dato datuashvili Jul 22 '13 at 18:32 $\begingroup$ Note, your answer was almost complete. Just note that $2^k\geq 1$ and therefore $$2^{k+1} = 2^k + 2^k \geq 2^k +1 > k+1$$ $\endgroup$ – Thomas Andrews Jul 22 '13 at 19:23 Proof by induction. Let $n \in \mathbb{N}$. Step $1.$: Let $n=1$ $\Rightarrow$ $n\lt2^{n}$ holds, since $ 1\lt 2$. Step $2.$: Assume $ n \lt 2^{n}$ holds where $n=k$ and $k \geq 1$. Step $3.$: Prove $n \lt 2^{n}$ holds for $n = k+ 1$ and $ k\geq 1$ to complete the proof. $k \lt 2^{k}$, using step $2$. $2\times k \lt 2\times2^{k}$ $ 2k \lt 2^{k+1}\quad(1)$ On the other hand, $k \geq 1 \Rightarrow k + 1 \leq k+k = 2k$. Hence $k+1\leq 2k\quad(2)$ By merging results (1) and (2): $k + 1 \leq 2k \lt 2^{k+1}$ $k + 1 \lt 2^{k+1}$ Hence, $ n \lt 2^{n}$ holds for all $ n \in \mathbb{N}$ Thomas Andrews
$\endgroup$ – user71352 Jul 22 '13 at 19:27 Counting argument: Let $S$ be a set with $n$ elements. There are $2^n$ subsets of $S$. There are $n$ singleton subsets of $S$. There is at least one non-singleton subset of $S$, the empty subset, so $2^n \geq n+1 > n$. Thomas Andrews $\begingroup$ I'm sorry, but doesn't that beg the question, i.e. why each set has $2^n$ subsets? So you have to complete this proof (it is clear of course how to do so, I just wanted to add this for the sake of completeness)... $\endgroup$ – W_D Jul 22 '13 at 18:48 $\begingroup$ Sure, if you don't know what $2^n$ counts, then you can't use this proof. But this proof doesn't "beg the question" because I don't assume what is to be proven, I assume some other theorem. :) @AlexWhite $\endgroup$ – Thomas Andrews Jul 22 '13 at 18:51 Note that $\displaystyle 2^n=(1+1)^n=1+\sum_{k=1}^{n}\binom{n}{k}>\binom{n}{1}=n$ holds for all $n\in \mathbb{N}$. Pedro Tamaroff♦ Samrat Mukhopadhyay Since no-one's posted it yet: This is of course a special case of Cantor's theorem: for any cardinal number $n$, $n<2^n$, and so in particular it's true for all finite cardinals (aka naturals). Chris Eagle $\begingroup$ The only issue with this approach is that to match what the OP meant by $2^n$, you need to prove that $$|2^n| = |\underbrace{2\times 2\times\cdots \times 2}_{\text{$n$ times}}|.$$ $\endgroup$ – dfeuer Jul 23 '13 at 0:27 You can also prove this using the derivative. The inequality $n<2^n$ holds for $n=1$, and moreover $$1 < \log 2 \cdot 2^x$$ for all real $x\geq 1$, so $x < 2^x$ holds for all real $x\geq 1$ and in particular for all natural numbers $n$.
nbubis $\begingroup$ To be complete, don't forget to state that both functions are monotone and positive. $\endgroup$ – kriss Jul 23 '13 at 8:56 Hint: $$\large{1+z+z^2+\ldots+z^n=\dfrac{z^{n+1}-1}{z-1}}$$ $$2^n=(2^n-1)+1=(1+2+2^2+\ldots+2^{n-1})(2-1)+1\geq\underbrace{(1+1+\ldots+1)}_{n}+1=n+1\gt n$$ M.H $\begingroup$ Since you asked, it is not a wrong solution, but the power series formula seems like a big stick to apply here, somehow. It is like my counting argument in that it references another result, but the counting argument feels like it gives an external intuition for the result, while this proof just seems to make it more complicated. In particular, any time you use $\dots$ in an expression in beginning proof theory, you are hiding an induction. My proof hides induction too, but it is perhaps a more intuitive case. $\endgroup$ – Thomas Andrews Jul 22 '13 at 19:31 To add yet another answer, let us use the AM/GM inequality. For $n\geq1$ one has $$\frac{2^{n}-1}{n}=\frac{2^0+2^1+\ldots +2^{n-1}}{n}\geq \left(2^{0+1+\ldots+(n-1)}\right)^{\frac1n}=2^{\frac{n-1}{2}}\geq1,$$ and therefore $2^n-n\geq 1$. Start wearing purple If we assume $2^k>k$, then $2^{k+1}=2\cdot 2^k>2k$, and $2k\ge k+1\iff k\ge 1$. lab bhattacharjee We need to prove the claim true for $n=k+1$, where $k\ge1$. That is, we need to prove that $k+1<2^{k+1}$. Observe that: $$ \begin{align*} k+1&<2^k+1 & \text{by the induction hypothesis} \\ &<2^k+2 \\ &=2^k+2^1 \\ &\le2^k+2^k & \text{since } 1\le k \\ &= 2(2^k) \\ &= 2^{k+1} \end{align*} $$ as desired. Adriano Here's your proof: … just kidding of course … kind of. Well, you can actually easily show that the derivative $\frac{d}{dx}2^x=2^x \log(2)$ is greater than 1 for all $x\geq1$ (the break-even point is somewhere around 0.528766) and since 1 is the derivative of $f(x)=x$ of course, we just need to show that $2^x>x$ for $x=1$, i.e.
that $2^1>1$ and we can deduce that this will always be the case because the gradient is always greater for $2^x$ than for $x$. And since it is true for all real numbers $\geq 1$ it's of course also true for the natural numbers. Uncountably infinite overkill if you will, but still an easy proof. You can also go on to prove that $2^x>x$ for all real numbers. For $x$ smaller than the above-mentioned break-even point of $x=-\frac{\log(\log(2))}{\log(2)}\approx 0.528766$ the above argument is true just in reverse. The gradient of $x$ will always be greater than that of $2^x$. For $x=-\frac{\log(\log(2))}{\log(2)}$ itself it's a matter of a simple calculation to show that $2^x>x$ since $2^{-\frac{\log(\log(2))}{\log(2)}}=\frac{1}{\log(2)}\approx 1.442695$. Again, Wolfram Alpha has a nice visualization for this. So tl;dr of this: at no point are the two functions closer than for $x=-\frac{\log(\log(2))}{\log(2)}\approx 0.528766$ (this means especially that they do not cross) and even there $2^x>x$. Christian Well, $2^k + 1 < 2^{k+1}$ for $k \geq 1$, so $k + 1 < 2^k + 1 < 2^{k+1}$. Michael Albanese Assume there are $n$ kinds of fruit and for each kind you either take it or not; so you have $2^n$ options. If you can only choose one fruit of all, you will have $n$ options. In which scenario do you have more options?! Sadegh $\begingroup$ I don't understand this answer as written, at least the $2^n$ part. For the $n$ part, if you have $n$ different fruits, and you choose exactly one of them, there are $n$ options, I agree. On the other hand, if you have $n$ different fruits, and for each fruit, you either select it or do not select it, then there are $2^n$ options for the sets of selected fruits. Is that what is meant? This is similar to Thomas's answer. $\endgroup$ – Jonas Meyer Jul 23 '13 at 7:02 $\begingroup$ @jonasMeyer Yes, that's exactly what I meant. $\endgroup$ – Sadegh Jul 23 '13 at 7:09
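Alongside the proofs above, a quick machine check for small $n$ is sometimes reassuring; it is of course not a proof, and the bound of 5000 below is an arbitrary choice:

```python
# Check n < 2**n for n = 1..5000.
# Python integers have arbitrary precision, so 2**n never overflows.
for n in range(1, 5001):
    assert n < 2**n, f"inequality fails at n={n}"
print("n < 2**n holds for n = 1..5000")
```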
\begin{document} \title{Unbounded towers and the Michael line topology} \begin{abstract} A topological space satisfies $\GNga$ (also known as Gerlits--Nagy's property $\gamma$) if every open cover of the space such that each finite subset of the space is contained in a member of the cover, contains a point-cofinite cover of the space. A topological space satisfies $\ctblga$ if in the above definition we consider countable covers. We prove that subspaces of the Michael line with a special combinatorial structure have the property $\ctblga$. Then we apply this result to products of sets of reals with the property $\GNga$. The main method used in the paper is coherent omission of intervals, invented by Tsaban. \end{abstract} \section{Introduction} By \emph{space} we mean a topological space. A \emph{cover} of a space is a family of proper subsets of the space whose union is the entire space. An \emph{open} cover is a cover whose members are open subsets of the space. An \emph{$\omega$-cover} is an open cover such that each finite subset of the space is contained in a set from the cover. A \emph{$\gamma$-cover} is an infinite open cover such that each point of the space belongs to all but finitely many sets from the cover. Given a space, let $\Omega$, $\ctblOm$, $\Gamma$ be the families of $\omega$-covers, countable $\omega$-covers and $\gamma$-covers, respectively. For families $\cA$ and $\cB$ of covers of a space, the property that every cover in the family $\cA$ has a subcover in the family $\cB$ is denoted by $\binom{ \ \cA \ }{\cB}$. The property $\mathsf{S}_1(\cA, \cB)$ means that for each sequence $\cU_1, \cU_2,\dotsc \in\cA$ there are sets $U_1\in\cU_1, U_2\in\cU_2,\dotsc$ such that $\sset{U_n}{n\in\bbN}\in\cB$. Let $\roth$ be the set of infinite subsets of $\bbN$ and $\Fin$ be the set of finite subsets. For sets $a, b\in\roth$ we say that $a$ is an \textit{almost subset} of $b$, denoted $a\sub^*b$, if the set $a\sm b$ is finite.
A \textit{pseudointersection} of a family of infinite sets is an infinite set $a$ with $a\sub^*b$ for all sets $b$ in the family. A family of infinite sets is \textit{centered} if the finite intersections of its elements are infinite. Let $\fp$ be the minimal cardinality of a subfamily of $\roth$ that is centered and has no pseudointersection. \bdfn[{\cite[Definition~2.2]{unbddtower}}] Let $\kappa$ be an uncountable cardinal number. A set $X\sub \roth$ with $|X|\geqslant \kappa$ is a $\kappa$-\emph{generalized tower} if for each function $a\in\roth$, there are sets $b\in\roth$ and $S\sub X$ with $|S|<\kappa$ such that $$ x\cap \Un_{n\in b} [a(n), a(n+1))\in\Fin$$ for all sets $x\in X\sm S$. \edfn Let $\kappa$ be an uncountable cardinal number. A set $X\cup\Fin$ is a \emph{$\kappa$-generalized tower set} if the set $X$ is a $\kappa$-generalized tower. The \emph{Michael line} is the set $\PN$, with the topology where the points of the set $\roth$ are isolated, and the neighborhoods of the points of the set $\Fin$ are those induced by the Cantor space topology on $\PN$. \blem[{\cite[Lemma~1.2.]{miller}}]\label{Fin} Let $\cU$ be a family of open sets in $\PN$ such that $\cU\in\Omega(\Fin)$. There are a function $a\in\roth$ and sets $U_1, U_2, \dotsc\in\cU$ such that for each set $x\in \roth$ and all natural numbers $n$: \[ \text{If } x\cap [a(n), a(n+1))=\emptyset, \text{ then } x\in U_n. \] \elem For a set $U\sub\PN$, let $\Int (U)$ be the interior of the set $U$ in the Cantor space topology on $\PN$. If $\cU\in\Omega(\Fin)$ is a family of open sets in $\PN$ with the Michael line topology, then $\sset{\Int (U)}{U\in\cU}\in\Omega(\Fin)$. Thus Lemma~\ref{Fin} holds for a family of open sets with the Michael line topology. \section{Main result} For functions $f,g\in\NN$ let $(f\circ g)\in\NN$ be a function such that $(f\circ g)(n):=f(g(n))$ for all natural numbers $n$. \bthm\label{thm1} Let $X\cup\Fin$ be a $\fp$-generalized tower set with the Michael line topology.
The space $X\cup\Fin$ satisfies $\ctblga$. \ethm \bpf Let $\cU\in\ctblOm (X\cup\Fin)$ be a family of open sets in $\PN$ with the Michael line topology. Let $S_1:=\Fin$. Fix a natural number $k>1$, and assume that the set $S_{k-1}\sub X\cup\Fin$ with $\Fin\sub S_{k-1}$ and $|S_{k-1}|<\fp$ has already been defined. Since $|S_{k-1}|<\fp$, there is $\cV\sub\cU$ such that $\cV\in\Gamma(S_{k-1})$. By Lemma~\ref{Fin}, there are a function $a_k\in\roth$ and sets $U_1^{(k)}, U_2^{(k)},\dotsc\in\cV$ such that for each set $x\in\roth$ and all natural numbers $n$: \beq \text{If }x\cap [a_k(n), a_k(n+1))=\emptyset, \text{ then } x\in U_n^{(k)}. \tag{$~\ref{thm1}$.$1$} \eeq Since the set $X$ is a $\fp$-generalized tower, there are a set $b_k\in\roth$ and a set $S_k\sub X\cup\Fin$ with $S_{k-1}\sub S_k$ and $|S_k|<\fp$ such that $$ x\cap \bigcup_{n\in b_k} [a_k(n), a_k(n+1))\in\Fin$$ for all sets $x\in X\sm S_k$. Then $$ \sset{U_{b_k(j)}^{(k)}}{j\in\bbN}\in\Gamma((X\sm S_k)\cup S_{k-1}) .$$ There is a function $a\in\roth$ such that for each natural number $k$, we have $$ |(a_k\circ b_k)\cap [a(n), a(n+1))|\geqslant 2,$$ for all but finitely many natural numbers $n$. Since the set $X$ is a $\fp$-generalized tower, there are a set $b\in\roth$ and a set $S\sub X$ with $|S|<\fp$ such that \beq x\cap \bigcup_{n\in b} [a(n), a(n+1))\in\Fin \tag{$~\ref{thm1}$.$2$} \eeq for all sets $x\in X\sm S$. We may assume that $\bigcup_{k} S_k\sub S$. The sets \beq c_k:=\sset{i\in b_k}{[a_k(i), a_k(i+1))\sub \bigcup_{n\in b}[a(n), a(n+1))} \tag{$~\ref{thm1}.3$} \eeq are infinite for all natural numbers $k$. Thus, $$ \sset{U^{(k)}_{c_k(j)}}{j\in\bbN} \in \Gamma((X\sm S_k)\cup S_{k-1}).$$ Since the sequence of the sets $S_k$ is increasing, we have $X = \Un_k (X\sm S_k)\cup S_{k-1}$ and each point of $X$ belongs to all but finitely many sets $(X\sm S_k)\cup S_{k-1}$.
For each point $x\in S$, define $$ g_x(k):= \begin{cases} 0 & x\notin (X\sm S_k)\cup S_{k-1},\\ \min\sset{j}{x\in\bigcap_{i\geqslant j} U^{(k)}_{c_k(i)}} & x\in (X\sm S_k)\cup S_{k-1}. \end{cases}$$ Since $|S|<\fp$, there is a function $g\in\bbN^{\bbN}$ with $\sset{g_x}{x\in S}\leqslant^* g$ and \beq a_k(c_k(g(k)+1))<a_{k+1}(c_{k+1}(g(k+1))) \tag{$~\ref{thm1}.4$} \eeq for all natural numbers $k$. Let $$ \cW_k:=\sset{U^{(k)}_{c_k(j)}}{j\geqslant g(k)}$$ for all natural numbers $k$. Then $\cW_1, \cW_2, \dotsc\in\Gamma(S)$. We may assume that the families $\cW_k$ are pairwise disjoint. Since the properties $\ctblga$ and $\mathsf{S}_1(\ctblOm, \Gamma)$ are equivalent, the set $S$ satisfies $\mathsf{S}_1(\ctblOm, \Gamma)$. Then there is a function $h\in\bbN^{\bbN}$ such that $g\leqslant h$ and $$ \sset{U^{(k)}_{c_k(h(k))}}{k\in\bbN}\in\Gamma(S).$$ Fix a set $x\in X\sm S$. By ($~\ref{thm1}.3$), for each natural number $k$, we have $$ \Un_{n\in c_k}[a_k(n), a_k(n+1))\sub \Un_{n\in b} [a(n), a(n+1)).$$ By ($~\ref{thm1}.2$), ($~\ref{thm1}.4$) and the fact that $g\leqslant h$, the set $x$ omits all but finitely many intervals $$ [a_k(c_k(h(k))), a_k(c_k(h(k))+1) ).$$ By ($~\ref{thm1}.1$), we have $$ \sset{U^{(k)}_{c_k(h(k))}}{k\in\bbN}\in\Gamma(X\sm S).$$ Then $$ \sset{U^{(k)}_{c_k(h(k))}}{k\in\bbN}\in\Gamma(X\cup\Fin).$$ \epf \section{Applications} For spaces $X$ and $Y$, let $X\sqcup Y$ be the \emph{disjoint union} of these spaces. Let $X$ be a space satisfying $\ctblga$. Then the space $X\sqcup X$ satisfies $\ctblga$. In the realm of sets of reals, the properties $\ctblga$ and $\GNga$ are equivalent. \blem [{\cite[Proposition~2.3.]{miller}}]\label{m} If $X, Y$ are sets of reals, then the space $X\x Y$ satisfies $\GNga$ if and only if the space $X\sqcup Y$ satisfies $\GNga$. \elem From our main result we can obtain the following corollary, which was originally proved by Szewczak and W\l udecka~\cite[Theorem~4.1.(1)]{unbddtower}.
\bcor Let $n\in\bbN$ and $X_1\cup \Fin,\dotsc, X_n\cup\Fin$ be $\fp$-generalized tower sets with the Cantor topology. Then the space $(X_1\cup\Fin)\x\dotsb\x(X_n\cup\Fin)$ satisfies $\ctblga$. \ecor \bpf We prove the statement for $n=2$. The proof for other $n$ is similar. Let $X, Y$ be $\fp$-generalized towers in $\roth$. Then $X\cup Y$ is a $\fp$-generalized tower. By Theorem~\ref{thm1}, the space $X\cup Y\cup \Fin$ with the Michael line topology satisfies $\ctblga$. Then the space $(X\cup Y\cup \Fin)\sqcup (X\cup Y\cup \Fin)$ satisfies $\ctblga$. Since the property $\ctblga$ is hereditary for closed subsets, the space $(X\cup\Fin)\sqcup (Y\cup\Fin)$ with the Michael line topology satisfies $\ctblga$. Then $(X\cup\Fin)\sqcup (Y\cup\Fin)$ with the Cantor topology satisfies $\ctblga$, and $\GNga$, too. By Lemma~\ref{m}, the space $(X\cup \Fin)\x (Y\cup \Fin)$ with the Cantor topology satisfies $\GNga$. \epf \begin{thebibliography}{99} \Pa{arch}{A. Arhangel'ski\u{\i}}{The frequency spectrum of a topological space and the classification of spaces}{Soviet Math. Dokl.}{13}{1972}{1185}{1189} \Pa{arch2}{A. Arhangel'ski\u{\i}}{Hurewicz spaces, analytic sets and fan tightness of function spaces}{Soviet Mathematics Doklady}{33}{1986}{396}{399} \bibitem{BarJu} T.~Bartoszy\'nski, H.~Judah, Set Theory: On the structure of the real line, A. K. Peters, Massachusetts: 1995. \Pa{BaTs}{T. Bartoszy\'nski, B. Tsaban}{Hereditary topological diagonalizations and the Menger--Hurewicz Conjectures}{Proceedings of the American Mathematical Society}{134}{2006}{605}{615} \bibitem{blass} A. Blass, \emph{Combinatorial cardinal characteristics of the continuum}, in: \textbf{Handbook of Set Theory} (M. Foreman, A. Kanamori, eds.), Springer, 2010, 395--489. \Pa{QN}{L. Bukovsk\'y, J. Hale\v{s}}{QN-spaces, wQN-spaces and covering properties}{Topology and its Applications}{154}{2007}{848}{858} \Pa{wQN}{L.
Bukovsk\'y}{On $\mathrm{wQN}_*$ and $\mathrm{wQN}^*$ spaces}{Topology and its Applications}{156}{2008}{24}{27} \Pa{brp}{L. Bukovsk\'{y}, I. Rec\l{}aw, M. Repick\'y}{Spaces not distinguishing convergences of real-valued functions}{Topology and its Applications}{112}{2001}{13}{40} \Pa{gn}{J. Gerlits, Zs. Nagy}{Some properties of $\Cp(X)$, I}{Topology and its Applications}{14}{1982}{151}{161} \Pa{gami}{F. Galvin, A. Miller}{$\gamma$-sets and other singular sets of real numbers}{Topology and its Applications}{17}{1984}{145}{155} \Pa{hales}{J. Hale\v{s}}{On Scheepers' conjecture}{Acta Universitatis Carolinae. Mathematica et Physica}{46}{2005}{27}{31} \Pa{coc2}{W. Just, A. Miller, M. Scheepers, P. Szeptycki}{The combinatorics of open covers II}{ Topology and its Applications}{73}{1996}{241}{266} \Pa{Laver}{R. Laver}{On the consistency of Borel's conjecture}{Acta Mathematicae}{137}{1976}{151}{169} \bibitem{miller} A.~Miller, \emph{A hodgepodge of sets of reals}, Note di Matematica \textbf{27} (2007), suppl. 1, 25--39. \Pa{BBC}{A. Miller, B. Tsaban}{Point-cofinite covers in Laver's model}{Proceedings of the American Mathematical Society}{138}{2010}{3313}{3321} \Pa{gamma}{A. Miller, B. Tsaban, L. Zdomskyy}{Selective covering properties of product spaces, II: $\gamma$ spaces}{Transactions of the American Mathematical Society}{368}{2016}{2865}{2889} \Pa{ot}{T. Orenshtein, B. Tsaban}{Linear $\sigma$-additivity and some applications}{Transactions of the American Mathematical Society}{363}{2011}{3621}{3637} \bibitem{sss}A. Osipov, P. Szewczak, B. Tsaban, \emph{Strongly sequentially separable function spaces, via selection principles}, Topology and its Applications, \textbf{270} (2020), 106942. \bibitem{sakai} M.~Sakai, \emph{Property C'' and function spaces}, Proceedings of the American Mathematical Society \textbf{104} (1988), 917--919. \Pa{sakaiCp}{M. Sakai}{The sequence selection properties of $\Cp(X)$}{Topology and its Applications}{154}{2007}{552}{560} \Pa{sakaisemcont}{M. 
Sakai}{Selection principles and upper semicontinuous functions}{Colloquium Mathematicum}{117}{2009}{251}{256}. \bibitem{SakaiScheepersPIT} M. Sakai, M. Scheepers, \emph{The combinatorics of open covers}, in: \textbf{Recent Progress in General Topology III} (K. Hart, J. van Mill, P. Simon, eds.), Atlantis Press, 2014, 751--800. \Pa{coc1}{M. Scheepers}{Combinatorics of open covers. I: Ramsey theory}{Topology and its Applications}{69}{1996}{31}{62} \Pa{SchCp}{M. Scheepers}{Sequential convergence in $\Cp(X)$ and a covering property}{East-West Journal of Mathematics}{1}{1999}{207}{214} \Pa{alpha_i}{M. Scheepers}{$\Cp(X)$ and Arhangel'ski\u{\i}'s $\alpha_i$ spaces}{Topology and its Applications}{89}{1998}{265}{275} \Pa{CBC}{M. Scheepers, B. Tsaban}{The combinatorics of Borel covers}{Topology and its Applications}{121}{2002}{357}{382} \bibitem{ST} P.~Szewczak, B.~Tsaban, \emph{Products of Menger spaces: A combinatorial approach}, Annals of Pure and Applied Logic \textbf{168} (2017), 1--18. \Pa{unbddtower}{P. Szewczak, M. Włudecka}{Unbounded towers and products}{Annals of Pure and Applied Logic}{172}{2021}{102900} \bibitem{add} B.~Tsaban, \emph{Additivity numbers of covering properties}, in: \textbf{Selection Principles and Covering Properties in Topology} (L.~Kocinac, editor), Quaderni di Matematica 18, Seconda Universita di Napoli, Caserta 2006, 245--282. \bibitem{MHP} B. Tsaban, \emph{Menger's and Hurewicz's Problems: Solutions from ``The Book'' and refinements}, Contemporary Mathematics \textbf{533} (2011), 211--226. \Pa{sfh}{B. Tsaban, L. Zdomskyy}{Scales, fields, and a problem of Hurewicz}{Journal of the European Mathematical Society}{10}{2008}{837}{866} \Pa{TsZdArh}{B. Tsaban, L. Zdomskyy}{Hereditarily Hurewicz spaces and Arhangel'ski\u{\i} sheaf amalgamations}{Journal of the European Mathematical Society}{12}{2012}{353}{372} \end{thebibliography} \end{document}
\begin{document} \title{checkmate: Fast Argument Checks for Defensive \R{} Programming} \abstract{ Dynamically typed programming languages like \texttt{R}{} allow programmers to write generic, flexible and concise code and to interact with the language using an interactive Read-eval-print-loop (REPL). However, this flexibility has its price: As the \texttt{R}{} interpreter has no information about the expected variable type, many base functions automatically convert the input instead of raising an exception. Unfortunately, this frequently leads to runtime errors deeper down the call stack which obfuscates the original problem and renders debugging challenging. Even worse, unwanted conversions can remain undetected and skew or invalidate the results of a statistical analysis. As a resort, assertions can be employed to detect unexpected input during runtime and to signal understandable and traceable errors. The package \CRANpkg{checkmate} provides a plethora of functions to check the type and related properties of the most frequently used \texttt{R}{} objects and variable types. The package is mostly written in C to avoid any unnecessary performance overhead. Thus, the programmer can conveniently write concise, well-tested assertions which outperform custom \texttt{R}{} code for many applications. Furthermore, checkmate simplifies writing unit tests using the framework~\CRANpkg{testthat}~\citep{wickham_2011} by extending it with plenty of additional expectation functions, and registered C routines are available for package developers to perform assertions on arbitrary SEXPs (internal data structure for \texttt{R}{} objects implemented as struct in C) in compiled code. } \section{Defensive Programming in \texttt{R}{}} \label{sec:introduction} Most dynamic languages utilize a weak type system where the type of a variable need not be declared, and \texttt{R}{} is no exception in this regard.
On the one hand, a weak type system generally reduces the code base and encourages rapid prototyping of functions. On the other hand, in comparison to strongly typed languages like C/C++, errors in the program flow are much harder to detect. Without the type information, the \texttt{R}{} interpreter just relies on the called functions to handle their input in a meaningful way. Unfortunately, many of \texttt{R}{}'s base functions are implemented with the REPL in mind. Thus, instead of raising an exception, many functions silently try to auto-convert the input. E.g., instead of assuming that the input \code{NULL} does not make sense for the function \code{mean()}, the value \code{NA} of type numeric is returned and additionally a warning message is signaled. While this behaviour is acceptable for interactive REPL usage where the user can directly react to the warning, it is highly unfavorable in packages or non-interactively executed scripts. As the generated missing value is passed to other functions deeper down the call stack, it will eventually raise an error. However, the error will be reported in a different context and associated with different functions and variable names. The link to origin of the problem is missing and debugging becomes much more challenging. Furthermore, the investigation of the call stack with tools like \code{traceback()} or \code{browser()} can result in an overwhelming number of steps and functions. As the auto-conversions cascade nearly unpredictably (as illustrated in Table~\ref{tab:ex_base_funs}), this may lead to undetected errors and thus to misinterpretation of the reported results. 
\begin{table}[ht] \footnotesize \centering \begin{tabular}{l|cccc}\toprule & \multicolumn{4}{c}{Return value of} \\ Input & \code{mean(x)} & \code{median(x)} & \code{sin(x)} & \code{min(x)} \\ \midrule \code{numeric(0)} & \code{NaN} & \code{NA} & \code{numeric(0)} & \code{Inf} (w) \\ \code{character(0)} & \code{NA\_real\_} (w) & \code{NA\_character\_} & [exception] & \code{NA\_character\_} (w) \\ \code{NA} & \code{NA\_real\_} & \code{NA} & \code{NA\_real\_} & \code{NA\_integer\_} \\ \code{NA\_character\_} & \code{NA\_real\_} (w) & \code{NA\_character\_} & [exception] & \code{NA\_character\_} \\ \code{NaN} & \code{NaN} & \code{NA} & \code{NaN} & \code{NaN} \\ \code{NULL} & \code{NA} (w) & \code{NULL} (w) & [exception] & \code{Inf} (w) \\ \bottomrule \end{tabular} \caption{Input and output for some simple mathematical functions from the \code{base} package (\texttt{R}{}-3.3.1). Outputs marked with \enquote{(w)} have issued a warning message. }\label{tab:ex_base_funs} \end{table} The described problems lead to a concept called \enquote{defensive programming} where the programmer is responsible for manually checking function arguments. Reacting to unexpected input as soon as possible by signaling errors instantaneously with a helpful error message is the key aspect of this programming paradigm. A similar concept is called \enquote{design by contract} which demands the definition of formal, precise and verifiable input and in return guarantees a sane program flow if all preconditions hold. The package \pkg{checkmate} assists the programmer in writing such assertions in a concise way for the most important \texttt{R}{} objects. \section{Related work} \label{sec:related_work} Many packages contain custom code to perform argument checks. These either rely on (a) the base function \code{stopifnot()} or (b) hand-written cascades of \code{if-else} blocks containing calls to \code{stop()}. 
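For the recurring example of an argument that must be a numeric vector with no missing and no negative values, the two options look roughly as follows (a hand-written sketch; the error messages are purely illustrative):

\begin{verbatim}
## (a) quick, but with uninformative error messages:
stopifnot(is.numeric(x), all(!is.na(x)), all(x >= 0))

## (b) verbose cascade of if-else blocks:
if (!is.numeric(x))
    stop("'x' must be a numeric vector")
if (anyNA(x))
    stop("'x' may not contain missing values")
if (any(x < 0))
    stop("all elements of 'x' must be non-negative")
\end{verbatim}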
Option (a) can be considered a quick hack because the raised error messages lack helpful details or instructions for the user. Option (b) is the natural way of doing argument checks in \texttt{R}{} but quickly becomes tedious. For this reason, many packages include their own check functions, but there are also some packages on CRAN whose sole purpose is argument checking. The package~\CRANpkg{assertthat}~\citep{wickham_2013} provides the \enquote{drop-in replacement} \code{assert\_that()} for \texttt{R}{}'s \code{stopifnot()} while generating more informative help messages. This is achieved by evaluating the expression passed to the function \code{assert\_that()} in an environment where functions and operators from the base package (e.g.\, \code{as.numeric()} or \code{`==`}) are overloaded by more verbose counterparts. E.g., to check that a variable is suitable to be passed to the \code{log()} function, one would require a numeric vector with all positive elements and no missing values: \begin{verbatim} assert_that(is.numeric(x), length(x) > 0, all(!is.na(x)), all(x >= 0)) \end{verbatim} Furthermore, \pkg{assertthat} offers some additional convenience functions like \code{is.flag()} to check for single logical values or \code{has\_name()} to check for the presence of specific names. These functions also prove useful if used with \code{see\_if()} instead of \code{assert\_that()}, which turns the passed expression into a predicate function returning a logical value. The package \CRANpkg{assertive}~\citep{cotton_2016} is another popular package for argument checks. Its functionality is split over 15~packages containing over 400~functions, each specialized for a specific class of assertions: for instance, \CRANpkg{assertive.numbers} specializes in checks of numbers and \CRANpkg{assertive.sets} offers functions to work with sets.
The functions are grouped by prefix: functions starting with \code{is\_} are predicate functions while functions starting with \code{assert\_} perform \code{stopifnot()}-equivalent operations. The author provides a \enquote{checklist of checks} as a package vignette to assist the user in picking the right functions for common situations like checks for numeric vectors or for working with files. Picking up the \code{log()} example again, the input check with \pkg{assertive} translates to:
\begin{verbatim}
assert_is_numeric(x)
assert_is_non_empty(x)
assert_all_are_not_na(x)
assert_all_are_greater_than_or_equal_to(x, 0)
\end{verbatim}
Moreover, the package \CRANpkg{assertr}~\citep{fischetti_2016} focuses on assertions for \CRANpkg{magrittr}~\citep{bache_2014} pipelines and data frame operations in \CRANpkg{dplyr}~\citep{wickham_2016}, but is not intended for generic runtime assertions. \section{The \pkg{checkmate} Package} \label{sec:checkmate} \subsection{Design goals} \label{ssec:design} The package has been implemented with the following goals in mind: \begin{description} \item[Runtime] To minimize any concern about the extra computation time required for assertions, most functions directly jump into compiled code to perform the assertions directly on the SEXPs. The functions have also been extensively optimized to first perform inexpensive checks in order to be able to skip the expensive ones. \item[Memory] In many domains the user input can be rather large, e.g.\ long vectors and high dimensional matrices are common in text mining and bioinformatics. Basic checks, e.g.\ for missingness, are already quite time consuming, but if intermediate objects of the same dimension have to be created, runtimes easily get out of hand. For example, \code{any(x < 0)} with \code{x} being a large numeric matrix internally first allocates a logical matrix \code{tmp} with the same dimensions as \code{x}.
The matrix \code{tmp} is then passed in a second step to \code{any()} which aggregates the logical matrix to a single logical value and \code{tmp} is marked to be garbage collected. Besides a possible shortage of available memory, which may cause the machine to swap or the R interpreter to terminate, runtime is wasted with unnecessary memory management. \pkg{checkmate} solves this problem by looping directly over the elements and thereby avoiding any intermediate objects. \item[Code completion] The package aims to provide a single function for all frequently used \texttt{R}{} objects and their respective characteristics and attributes. For example, the function \code{assertNumeric()} provides arguments to check for length, missingness and lower/upper bound. After typing the function name, the code completion of editors which speak \texttt{R}{} can suggest additional checks for the respective variable type. This context-sensitive assistance often helps writing more concise assertions. \end{description} \subsection{Naming scheme} \label{ssec:naming} The core functions of the package follow a specific naming scheme: The first part (prefix) of a function name determines the action to perform w.r.t.\ the outcome of the respective check while the second part of a function name (suffix) determines the base type of the object to check. The first argument of all functions is always the object~\code{x} to check and further arguments specify additional restrictions on \code{x}. \subsubsection{Prefixes} There are currently four families of functions, grouped by their prefix, implemented in \pkg{checkmate}: \begin{description} \item[assert*] Functions prefixed with \enquote{assert} throw an exception if the corresponding check fails and the checked object is returned invisibly on success. This family of functions is suitable for many different tasks. 
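A minimal sketch of this usage pattern (the function \code{f()} and the abbreviated error message are illustrative, not taken from the package documentation):

\begin{verbatim}
f <- function(x) {
  assertNumeric(x, any.missing = FALSE, lower = 0)
  sqrt(x)
}
f(c(1, 4, 9))  # assertion passes, sqrt() is evaluated
f(c(1, NA))    # assertion fails with an informative error
\end{verbatim}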
Besides argument checks of user input, this family of functions can also be used as a drop-in replacement for \code{stopifnot()} in unit tests using the internal test mechanism of \texttt{R}{} as described in Writing R Extensions \citep{rcoreteam_2016}, subsection 1.1.5. Furthermore, as the object to check is returned invisibly, the functions can also be used inside \pkg{magrittr} pipelines. \item[test*] Functions prefixed with \enquote{test} are predicate functions which return \code{TRUE} if the respective check is successful and \code{FALSE} otherwise. This family of functions is best utilized if different checks must be combined in a non-trivial manner or custom error messages are required. \item[expect*] Functions prefixed with \enquote{expect} are intended to be used together with \pkg{testthat}: the check is translated to an expectation which is then forwarded to the active \pkg{testthat} reporter. This way, \pkg{checkmate} extends the facilities of \pkg{testthat} with dozens of powerful helper functions to write efficient and comprehensive unit tests. Note that \pkg{testthat} is an optional dependency and the \code{expect}-functions only work if \pkg{testthat} is installed. Thus, to use \pkg{checkmate} as a \pkg{testthat} extension, \pkg{checkmate} must be listed in \code{Suggests} or \code{Imports} of a package. \item[check*] Functions prefixed with \enquote{check} return the error message as a string if the respective check fails, and \code{TRUE} otherwise. Functions with this prefix are the workhorses called by the \enquote{assert}, \enquote{test} and \enquote{expect} families of functions and prove especially useful to implement custom assertions. They can also be used to collect error messages in order to generate reports of multiple check violations at once. \end{description} The prefix and the suffix can be combined in both \enquote{camelBack} and \enquote{underscore\_case} fashion.
In other words, checkmate offers all functions with the \enquote{assert}, \enquote{test} and \enquote{check} prefix in both programming style flavors: \code{assert\_numeric()} is a synonym for \code{assertNumeric()} the same way \code{testDataFrame()} can be used instead of \code{test\_data\_frame()}. By supporting the two most predominant coding styles for \texttt{R}{}, most programmers can stick to their favorite style while implementing runtime assertions in their packages. \subsubsection{Suffixes} While the prefix determines the action to perform on a successful or failed check, the second part of each function name defines the base type of the first argument~\code{x}, e.g.\ \code{integer}, \code{character} or \code{matrix}. Additional function arguments restrict the object to fulfill further properties or attributes. \paragraph{Atomics and Vectors} The most important built-in atomics are supported via the suffixes \code{*Logical}, \code{*Numeric}, \code{*Integer}, \code{*Complex}, \code{*Character}, \code{*Factor}, and \code{*List} (strictly speaking, \enquote{numeric} is not an atomic type but a naming convention for objects of type \code{integer} or \code{double}). Although most operations that work on real values also are applicable to natural numbers, the contrary is often not true. Therefore numeric values frequently need to be converted to integer, and \code{*Integerish} ensures a conversion without surprises by checking double values to be \enquote{nearby} an integer w.r.t.\ a machine-dependent tolerance. Furthermore, the object can be checked to be a \code{vector}, an \code{atomic} or an atomic vector (a \code{vector}, but not \code{NULL}). 
All functions can optionally test for missing values (any or all missing), length (exact, minimum and maximum length) as well as names being \begin{enumerate*}[(a)] \item not present, \item present and not \code{NA}/empty, \item present, not \code{NA}/empty and unique, or \item present, not \code{NA}/empty, unique and additionally complying to \texttt{R}{}'s variable naming scheme. \end{enumerate*} There are more type-specific checks, e.g.\ bound checks for numerics or regular expression matching for characters. These are documented in full detail in the manual. \paragraph{Scalars} Atomics of length one are called scalars. Although \texttt{R}{} does not differentiate between scalars and vectors internally, scalars deserve particular attention in assertions as arguably most function arguments are expected to be scalar. Although scalars can also be checked with the functions that work on atomic vectors by additionally restricting them to length~1 via the argument \code{len}, \pkg{checkmate} provides some useful abbreviations: \code{*Flag} for logical scalars, \code{*Int} for an integerish value, \code{*Count} for a non-negative integerish value, \code{*Number} for numeric scalars and \code{*String} for scalar character vectors. Missing values are prohibited for all scalar values by default as scalars are usually not meant to hold data where missingness occurs naturally (but can be allowed explicitly via the argument \code{na.ok}). Again, additional type-specific checks are available which are described in the manual. \paragraph{Compound types} The most important compound types are matrices/arrays (vectors of type logical, numeric or character with attribute \code{dim}) and data frames (lists with attribute \code{row.names} and class \code{data.frame} storing atomic vectors of same length). The package also includes checks for the popular \code{data.frame} alternatives \CRANpkg{data.table}~\citep{dowle_2014} and \CRANpkg{tibble}~\citep{wickham_2016-1}.
Some checkable characteristics include the internal type(s), missingness, dimensions or dimension names. \paragraph{Miscellaneous} On top of the already described checks, there are functions to work with sets (\code{*Subset}, \code{*Choice} and \code{*SetEqual}), environments (\code{*Environment}) and objects of class \enquote{Date} (\code{*Date}). The \code{*Function} family checks \texttt{R}{} functions and their arguments, and \code{*OS} allows checking whether \texttt{R}{} is running on a specific operating system. The functions \code{*File} and \code{*Directory} test for existence and access rights of files and directories, respectively. The function \code{*PathForOutput} checks whether a directory can be used to store files in it. Furthermore, \pkg{checkmate} provides functions to check the class or names of arbitrary \texttt{R}{} objects with \code{*Class} and \code{*Names}. \paragraph{Custom checks} Extensions are possible by writing a \code{check*} function which returns \code{TRUE} on success and an informative error message otherwise. The exported functionals \code{makeAssertionFunction()}, \code{makeTestFunction()} and \code{makeExpectationFunction()} can wrap this custom check function to create the required counterparts in such a way that they seamlessly fit into the package. The vignette demonstrates this with a check function for square matrices. \subsection{DSL for argument checks} \label{ssec:dsl} Most basic checks can alternatively be performed using an implemented Domain Specific Language (DSL) via the functions \code{qassert()}, \code{qtest()} or \code{qexpect()}. All three functions have two arguments: the arbitrary object \code{x} to check and a \enquote{rule} which determines the checks to perform, provided as a single string.
Each rule consists of up to three parts: \begin{enumerate} \item The first character determines the expected class of \code{x}, e.g.\ \enquote{n} for numeric, \enquote{b} for boolean, \enquote{f} for a factor or \enquote{s} for a string (more can be looked up in the manual). By using a lowercase letter, missing values are permitted while an uppercase letter disallows missingness. \item The second part is the length definition. Supported are \enquote{?} for length~0 or length~1, \enquote{+} for length~$\geq 1$ as well as arbitrary length specifications like \enquote{1}/\enquote{==1} for exact length~1 or \enquote{<10} for length~$<10$. \item The third part triggers a range check, if applicable, in interval notation (e.g., \enquote{$[0,1)$} for values $0 \leq x < 1$). If the boundary value on an open side of the interval is missing, all values of \code{x} will be checked for being $>-\infty$ or $< \infty$, respectively. \end{enumerate} Although this syntax takes some time to get familiar with, it allows writing extensive argument checks with very few keystrokes. For example, the previous check for the input of \code{log()} translates to the rule \code{"N+[0,]"}. As the function signature is really simplistic, it is perfectly suited to be used from compiled code written in C/C++ to check arbitrary \code{SEXP}s. For this reason \pkg{checkmate} provides header files which foreign packages can link against. Instructions can be found in the package vignette. \section{Benchmarks} \label{sec:benchmarks} This small benchmark study picks up the $\log()$ example once again: testing a vector to be numeric with only positive, non-missing values.
\subsection{Implementations} \label{ssec:implementations} Now we compare \pkg{checkmate}'s \code{assertNumeric()} and \code{qassert()} (as briefly described in the previous Section~\nameref{ssec:dsl}) with counterparts written with \texttt{R}{}'s \code{stopifnot()}, \pkg{assertthat}'s \code{assert\_that()} and a series of \pkg{assertive}'s \code{assert\_*()} functions:
\begin{verbatim}
checkmate <- function(x) {
  assertNumeric(x, any.missing = FALSE, lower = 0)
}
qcheckmate <- function(x) {
  qassert(x, "N[0,]")
}
R <- function(x) {
  stopifnot(is.numeric(x), all(!is.na(x)), all(x >= 0))
}
assertthat <- function(x) {
  assert_that(is.numeric(x), all(!is.na(x)), all(x >= 0))
}
assertive <- function(x) {
  assert_is_numeric(x)
  assert_all_are_not_na(x)
  assert_all_are_greater_than_or_equal_to(x, 0)
}
\end{verbatim}
To allow measurement of failed assertions, the above functions are wrapped into a \code{try()}. The source code for this benchmark study is provided in the supplementary material. \subsection{Setup} The benchmark was performed on an Intel i5-6600 with \SI{16}{\giga\byte} of memory running \texttt{R}{}-3.3.1 on a 64bit Arch Linux installation. The package versions are 1.8.2 for \pkg{checkmate}, 0.1 for \pkg{assertthat} and 0.3-4 for \pkg{assertive}. \texttt{R}{}, the linked OpenBLAS and all packages have been compiled with the GNU Compiler Collection (GCC) in version 6.2.1 and tuned with \code{march=native} on optimization level \code{-O2}. To compare runtime differences, \CRANpkg{microbenchmark}~\citep{mersmann_2015} is set up to do 100~replications. The wrappers have also been compared to their byte-compiled versions (using \code{compiler::cmpfun}) with no notable difference in performance, thus the results presented later are extracted from the uncompiled versions of these wrappers. \subsection{Results} The benchmark is performed on four different inputs and the resulting timings are presented in Figure~\ref{fig:benchmark}.
\begin{figure} \caption{Violin plots of the runtimes on $\log_{10}$-scale of the assertion \enquote{$x$ must be a numeric vector with all elements positive and no missing values} on different input \code{x}.} \label{fig:benchmark} \end{figure} Note that the runtimes on the $x$-axis are on $\log_{10}$-scale and use different units of measurement. \begin{description} \item[top left] Input \code{x} is a scalar character value, i.e.\ of wrong type. This benchmark serves as a measurement of overhead: the first performed and cheapest assertion on the type of \code{x} directly fails. In fact, all assertion frameworks only require microseconds to terminate. \texttt{R}{} directly jumps into compiled code via a \code{Primitive} and therefore has the least overhead. \pkg{checkmate} on the other hand has to jump into the compiled code via the \code{.Call} interface which is comparably slower. The implementation in \pkg{assertthat} is faster than \pkg{checkmate} (as it also primarily calls primitives) but slightly slower than \code{stopifnot()}. The implementation in \pkg{assertive} is the slowest. However, in case of assertions (in comparison to tests returning logicals), the runtimes for a successful check are arguably more important than for a failed check because the latter raises an exception which usually is a rare event in the program flow and thus is not time-critical. Therefore, the next benchmark might be more relevant for many applications. \item[top right] Input \code{x} is a scalar numeric value. The implementations now additionally check for missingness and negative values and do not raise an exception. \code{qassert()} is the fastest implementation, followed by \code{assertNumeric()}. Although \code{qassert()} and \code{assertNumeric()} basically call the same code internally, \code{qassert()} has less overhead due to its minimalistic interface. 
\texttt{R}{}'s \code{stopifnot()} is a tad slower comparing the median runtimes but still faster than \pkg{assertthat} (5x slowdown in comparison to \code{qassert()}). \pkg{assertive} is >60x slower than \code{qassert()}. \item[bottom left] Input \code{x} is now a long vector with $10^6$ numeric elements. \pkg{checkmate} has the fastest versions with a speedup of approximately 3.5x compared to \texttt{R}{}'s \code{stopifnot()} and \code{assert\_that()}. In comparison to its alternatives, \pkg{checkmate} avoids intermediate objects as described in \nameref{ssec:design}: Instead of allocating a \code{logical(1e6)} vector first to aggregate it in a second step, \pkg{checkmate} directly operates on the numeric input. That is also the reason why \code{stopifnot()} and \code{assertthat()} have high variance in their runtimes: The garbage collector occasionally gets triggered to free memory which requires a substantial amount of time. \pkg{assertive} is orders of magnitude slower for this input (>1200x) because it follows a completely different philosophy: Instead of focusing on speed, \pkg{assertive} gathers detailed information while performing the assertion. This yields report-like error messages (e.g., the index and reason why an assertion failed, for each element of the vector) but is comparably slow. \item[bottom right] Input \code{x} is again a large vector, but the first element is a missing value. Here, all implementations first successfully check the type of \code{x} and then throw an error about the missing value. Again, \pkg{checkmate} avoids allocating intermediate objects which in this case yields an even bigger speedup: While the other packages first check $10^6$ elements for missingness to create a \code{logical(1e6)} vector which is then passed to \code{any()}, \pkg{checkmate} directly stops after analyzing the first element of \code{x}. 
This obvious optimization yields a speedup of 20x in comparison to \texttt{R}{} and \pkg{assertthat} and a 6000x speedup in comparison to \pkg{assertive}. \end{description} Summed up, \pkg{checkmate} is the fastest option to perform expensive checks and only causes a small decrease in performance for trivial, inexpensive checks which fail quickly (top left). Although the runtime differences seem insignificant for small input (top right), the saved microseconds can easily sum up to seconds or hours if the respective assertion is located in a hot spot of the program and therefore is called millions of times. For large input, the runtime differences are often notable without benchmarks, and become even more important as data grows bigger. \section{Conclusion} \label{sec:conclusion} Runtime assertions are a necessity in \texttt{R}{} to ensure a sane program flow, but \texttt{R}{} itself offers only very limited capabilities to perform these kinds of checks. \pkg{checkmate} allows programmers and package developers to write assertions in a concise way without sacrificing runtime performance or increasing the memory footprint. Compared to the presented alternatives, assertions with \pkg{checkmate} are faster, tailored for bigger data and (with the help of code completion) more convenient to write. They generate helpful error messages, are extensively tested for correctness and are suitable for large and extensive software projects (\CRANpkg{mlr}~\citep{bischl_2016} and \CRANpkg{BatchJobs}~\citep{bischl_2015} already make heavy use of \pkg{checkmate}). Furthermore, \pkg{checkmate} offers capabilities to do assertions on SEXPs in compiled code via a domain specific language and extends the popular unit testing framework \pkg{testthat} with many helpful expectation functions.
\section{Acknowledgments} Part of the work on this paper has been supported by Deutsche Forschungsgemeinschaft (DFG) within the Collaborative Research Center SFB~876 \enquote{Providing Information by Resource-Constrained Analysis}, project~A3 (\href{http://sfb876.tu-dortmund.de}{http://sfb876.tu-dortmund.de}). \end{document}
A survey of results on mobile phone datasets analysis

Vincent D. Blondel, Adeline Decuyper & Gautier Krings

In this paper, we review some advances made recently in the study of mobile phone datasets. This area of research emerged a decade ago, with the increasing availability of large-scale anonymized datasets, and has grown into a stand-alone topic. We survey the contributions made so far on the social networks that can be constructed with such data, the study of personal mobility, geographical partitioning, urban planning, and help towards development as well as security and privacy issues. Just as the Internet was the technological breakthrough of the '90s, mobile phones have changed our communication habits in the first decade of the twenty-first century. In a few years, the world coverage of mobile phone subscriptions has risen from 12% of the world population in 2000 up to 96% in 2014 - 6.8 billion subscribers - corresponding to a penetration of 128% in the developed world and 90% in developing countries [1]. Mobile communication has initiated the decline of landline use - decreasing both in the developing and developed world since 2005 - and allows people to be connected even in the most remote places of the world. In short, mobile phones are ubiquitous. In most countries of the developed world, the coverage reaches 100% of the population, and even in remote villages of developing countries, it is not unusual to cross paths with someone in the street talking on a mobile phone. Due to their ubiquity, mobile phones have stimulated the creativity of scientists to use them as millions of potential sensors of their environment. Mobile phones have been used as distributed seismographs, as motorway traffic sensors, as transmitters of medical imagery or as communication hubs for high-level data such as the reporting of invading species [2], to cite only a few of their many side-uses.
Besides these applications of voluntary reporting, where users install applications on their mobile phones with the aim of serving as sensors, the very essence of mobile phones has revealed them to be a source of even much richer data. The call data records (CDRs), needed by the mobile phone operators for billing purposes, contain an enormous amount of information on how, when, and with whom we communicate. In the past, research on social interactions between individuals was mostly done through surveys, for which the number of participants typically ranges around 1,000 people, and for which the results were biased by the subjectivity of the participants' answers. Mobile phone CDRs, instead, contain the information on communications between millions of people at a time, and contain real observations of communications between them rather than self-reported information. In addition, CDRs also contain location data and may be coupled to external data on customers such as age or gender. Such a combination of personal data makes mobile phone CDRs an extremely rich and informative source of data for scientists. The past few years have seen the rise of research based on the analysis of CDRs. First presented as a side-topic in network theory, it has now become a whole field of research in itself, and has been for a few years the leading topic of NetMob, an international conference on the analysis of mobile phone datasets, of which the fourth edition took place in April 2015. Closely related to this conference, a side-topic has now risen, namely the analysis of mobile phone datasets for the purpose of development. The telecom company Orange has, to this end, proposed a challenge named D4D, whose concept is to give a large number of research teams throughout the world access to the same dataset from an African country. Their purpose is to make suggestions for development, on the basis of the observations extracted from the mobile phone dataset.
The first challenge, conducted in 2013, was such a success that other initiatives like it have followed [3, 4], and the results of a second D4D challenge were presented at the NetMob conference in April 2015. Of course, there are restrictions on the availability of some types of data and on the projected applications. First, the content of communications (SMS or phone discussions) is not recorded by the operator, and thus inaccessible to any third party - with the exception of phone tapping, which is not part of this subject. Secondly, while mobile phone operators have access to all the information filed by their customers and the CDRs, they may not give the same access to all the information to a third party (such as researchers), depending on their own privacy policies and the laws on protection of privacy that apply in the country of application. For example, names and phone numbers are never transmitted to external parties. In some countries, location data, i.e., the base stations at which each call is made, have to remain confidential - some operators are not even allowed to use their own data for private research. Finally, when a company transmits data to a third party, it goes along with non-disclosure agreements (NDAs) and contracts that strongly regulate the authorised research directions, in order to protect the users' privacy. Recently, with the rise of smartphones, other methods of collecting data that overcome those drawbacks have been designed: projects such as Reality Mining [5], OtaSizzle [6], or Sensible DTU [7] consist of distributing smartphones to individuals who volunteered for the study. Preinstalled software then records data, and these data are further used for research by the team that distributed the smartphones. This new approach overcomes the privacy problems, as participants are clearly informed and consent to their data being used.
On the one hand, these projects gather very rich data, as they usually collect more than just call logs, but also Bluetooth proximity data, application usage, etc. On the other hand, the sample of participants is always much more limited than in the case of CDRs shared by a provider, and the dataset contains information on at most 1,000 participants. Yet, even the smallest bit of information is enough for triggering bursts of new applications, and day after day researchers discover new purposes one can get from mobile phone data. The first application of a study of phone logs (not mobile, though) appeared in 1949, with the seminal paper by George Zipf modeling the influence of distance on communication [8]. Since then, phone logs have been studied in order to infer relationships between the volume of communication and other parameters (see e.g. [9]), but the appearance of mobile phone data in massive quantities, and of computers and methods that are able to handle those data efficiently, has definitely made a breakthrough in that domain. Being personal objects, mobile phones make it possible to infer real social networks from their CDRs, while fixed phones are shared by the users of one same geographical space (a house, an office). The communications recorded on a mobile phone are thus representative of a part of the social network of one single person, whereas the records of a fixed phone show a superposition of several social actors. By being mobile, a mobile phone has two additional advantages: first, its owner almost always has the possibility to pick up a call, thus the communications reflect the temporal patterns of communications in great detail, and second, the positioning data of a mobile phone allows tracking the displacements of its owner. Given the large amount of research related to mobile phones, we will focus in this paper on contributions related to the analysis of massive CDR datasets.
A chapter of the (unpublished) PhD thesis of Gautier Krings [10] gives an overview of the literature on mobile phone dataset analysis. This research area is growing fast, and this survey is a significantly expanded version of that chapter, with additional sections and figures and an updated list of references. The paper is organized following the different types of data that may be used in related research. In Section 2 we will survey the contributions studying the topological properties of the static social network constructed from the calls between users. When information on the position of each node is available, such a network becomes a geographical network, and the relationship between distance and the structure of the network can be analyzed. This will be addressed in Section 3. Phone calls are always localized in time, and some of them might represent transient relationships while others represent long-lasting interactions. This has led researchers to study these networks as temporal networks, which will be presented in Section 4. In Section 5, we will focus on the abundant literature that has been produced on human mobility, made possible by the spatio-temporal information contained in CDR data. As mobile phone networks represent in their essence the transmission of information or, more recently, of data between users, we will cover this topic in Section 6, with contributions on information diffusion and the spread of mobile phone viruses. Some contributions combine many of these different approaches to use mobile phone data for many different applications, which will be the object of Section 7. Finally, in Section 8 we will consider privacy issues raised by the availability and use of personal data. In its simplest representation, a dataset of people making phone calls to each other is a network where nodes are people and links are drawn between two nodes who call each other.
In the first publications related to telecommunications datasets, the datasets were used to demonstrate the potential applications of an algorithm [11] or model [12] rather than analyzed for their own sake. However, it quickly appeared that the so-called mobile call graphs (MCG) are structurally different from other complex networks, such as the web and the internet, and deserve particular attention; see Figure 1 for an example of snowball sampling of a mobile phone network. We will review here the different contributions on network analysis. We will address the construction of a social network from CDR data, which is not a trivial exercise, simple statistical properties of such networks and models that manage to reproduce them, more complex organizing principles and community structure, and finally the relevance of the analysis of mobile phone networks. Sample of a mobile phone network, obtained with a snowball sampling. The source node is represented by a square, bulk nodes by a + sign and surface nodes by an empty circle. Figure reproduced from [25]. While the network construction scheme mentioned above seems relatively simple, there exist many possible interpretations of how to define a link of the network, given a dataset. The primary aim of social network analysis is to observe social interactions, but not every phone call is made with the same social purpose. Some calls might be for business purposes, some might be accidental calls, some nodes may be call centers that call a large number of people, and all such interactions are present in CDRs. In short, CDRs are noisy datasets. 'Cleaning' operations are usually needed to eliminate some of the accidental edges. For example, Lambiotte et al. [13] imposed as conditions for a link that at least one call is made in both directions (reciprocity) and that at least 6 calls are made in total over the 6 months of the dataset.
This filtering operation appeared to remove a large fraction of the links of the network, but at the same time the total weight (the total number of calls placed by all users) was reduced by only a small fraction. The threshold of 6 calls in 6 months may be questionable, but a stability analysis around this value can confirm that the exact choice of the threshold is not crucial. Similarly, Onnela et al. [14] analyzed the differences between the degree distributions of two versions of the same dataset, one containing all calls of the dataset, and the other containing only calls that are reciprocated. Some nodes in the complete network have up to 30,000 different neighbors, while in the reciprocated network the maximal degree is close to 150. Clearly, in the first case it is hard to imagine that a node represents a single person, while the latter is a much more realistic bound. However, even if calls have been reciprocated, the question of setting a meaningful weight on each link is far from easy. Li et al. suggest another, more statistical approach in [15], and use multiple hypothesis testing to filter out the links that appeared randomly in the network and therefore do not reflect a true social relationship. Beyond these considerations of which calls or texts are representative of a true relationship, further corruption of the data can arise from multiple calls, multiple text messages or calls reaching an answering machine. Depending on the context, it may be preferable to filter out these communications, but most of the time they remain difficult to identify in the datasets. Moreover, additional biases can arise from the pricing plans of the operator, which may offer preferential prices for SMSs or for voice calls, thus influencing the behavior of the users that opted for such a pricing plan [16].
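As an illustration, the reciprocity-plus-threshold filter of Lambiotte et al. [13] can be sketched in a few lines of Python. The `(caller, callee)` pair representation of the call records and the toy data are assumptions made for the example, not the actual format used by any operator.

```python
from collections import Counter

def filter_links(calls, min_total=6):
    """Keep only reciprocated links with at least `min_total` calls overall.

    `calls` is a list of (caller, callee) pairs, one per recorded call.
    Returns an undirected edge set, following the filtering rule of
    Lambiotte et al. [13]: reciprocity plus a minimum total call count.
    """
    directed = Counter(calls)  # number of calls per directed pair
    edges = set()
    for (a, b), n_ab in directed.items():
        n_ba = directed.get((b, a), 0)
        # reciprocity: at least one call in each direction
        if n_ba > 0 and n_ab + n_ba >= min_total:
            edges.add(frozenset((a, b)))
    return edges

calls = [("A", "B")] * 4 + [("B", "A")] * 3 + [("A", "C")] * 7 + [("D", "A"), ("A", "D")]
print(filter_links(calls))  # only A-B survives: reciprocated, 7 calls in total
```

The A-C link illustrates why the reciprocity condition matters: seven calls were placed, but none were ever returned, so the link is discarded as likely non-social.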
It is sometimes convenient to represent a mobile call network by an undirected network, arguing that communication during a single phone call goes both ways, and to set the weight of a link as the sum of the weights in both directions. However, who initiates the call might be important in contexts other than the passing of information, depending on the aim of the research, and Kovanen et al. have shown that reciprocal calls are often strongly imbalanced [17]. In an interacting pair, one user often initiates most of the calls, so how can this be represented in an undirected network by a representative link weight? In a closely related question, most CDRs contain information on both voice calls and text messages, but so far it is not clear how to incorporate both pieces of information into one simple measure. Moreover, there seems to be a generational difference in the use of text messages, or in the preference between texts and voice calls, which may introduce a bias in measures that only take one type of communication into account [18]. Besides these considerations on the treatment of noise, the way to represent social ties may vary as well: they may be binary, weighted, symmetric or directed. Different choices lead to different network characteristics, and result in diverse possible interpretations of the same dataset. For example, Nanavati et al. [19] keep their network as a directed network, in order to obtain information on the strongly connected component of the network, while Onnela et al. [14] rather focus on an undirected network, weighted by the sum of calls going in both directions. A few different options for definitions of link weights and measures on nodes are given in Table 1.

Table 1 Definitions of node and link measures

It is close to impossible to define unified construction rules functioning for any dataset, given the many sources of variance between two sets of CDRs.
Besides the cases mentioned above, let us cite differences in social behaviors between inhabitants of different countries, or differences in the use of uncaptured technologies such as email, landline or messaging. The construction of a social network from CDRs should always be of primary importance for the researcher, bearing in mind that there is no 'one size fits all' technique available.

Topological properties

The simplest information one can get out of CDRs is statistical information on the number of acquaintances of a node, on the local density of the network or on its connectivity. Like other social networks, mobile call graphs differ from random networks and lattices by their broad degree distribution [20], their small diameter and their high clustering [21]. The level of clustering of a graph G is measured by its clustering coefficient, defined as the proportion of closed triangles among the connected triplets of nodes in G: $$ C(G) = \frac{3 \times\text{number of triangles}}{\text{number of paths of length }2}. $$ This coefficient takes values in the interval \([ 0, 1 ]\), and is found to be typically high in social networks and mobile call graphs. An alternative is to take the average of a local measure of the clustering around a given node i, defined as: $$ C(G) = \langle C_{i} \rangle, \quad\text{where } C_{i} = \frac{2 \times \text{number of triangles of which } i \text{ is a node}}{k_{i} (k_{i} -1)}, $$ where \(k_{i}\) is node i's degree, and hence \(k_{i} (k_{i}-1)/2\) is the maximum number of possible triangles around node i. The diameter of a graph G measures the greatest distance (in terms of number of edges) between any two vertices, and is typically small in social networks. As for the degree distribution, while all analyzed datasets present similar general shapes, their range and their fine shape differ due to differences between the datasets, the construction scheme, the size, or the time span of the collection period.
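The two clustering measures defined above can be computed directly from an adjacency structure. The following Python sketch assumes an unweighted, undirected graph stored as a dictionary of neighbor sets - a simplification of a real mobile call graph, with a four-node toy example invented for illustration.

```python
def local_clustering(adj, i):
    """Local clustering coefficient C_i of node i.

    `adj` maps each node to the set of its neighbors.
    C_i = 2 * (triangles through i) / (k_i * (k_i - 1)).
    """
    neigh = adj[i]
    k = len(neigh)
    if k < 2:
        return 0.0
    # each link among i's neighbors closes a triangle with i
    links = sum(1 for u in neigh for v in neigh if u < v and v in adj[u])
    return 2.0 * links / (k * (k - 1))

def average_clustering(adj):
    """Average of the local clustering coefficients, <C_i>."""
    return sum(local_clustering(adj, i) for i in adj) / len(adj)

# a triangle (1, 2, 3) plus a pendant node 4 attached to node 2
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
print(local_clustering(adj, 2))              # 1 link among 3 neighbors -> 2/6
print(round(average_clustering(adj), 3))     # (1/3 + 1 + 1 + 0) / 4
```

The pendant node 4 has degree 1 and contributes zero, which is why tree-like parts of a graph pull the average clustering down.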
In one of the first studies involving CDR data, Aiello et al. [12] observed a power-law degree distribution, which was well explained by a massive random graph model \(P(\alpha,\beta)\) described by its power-law degree distribution \(p(d=x) = e^{\alpha }x^{-\beta}\). Power-law distributions have often been observed in empirical datasets, but characterizing their parameters and determining whether the data really correspond to a power-law distribution is not an easy question, as presented by Clauset et al. [22]. Random graph models have often been used to model networks, and manage to reproduce some observations from real-world networks, such as the small diameter and the presence of a giant component, as observed in mobile datasets. However, they fail to uncover more complex features, such as degree-degree correlations. Nanavati et al. [19] observed in a study of 4 mobile datasets that besides the power-law tail of the degree distribution, the degree of a node is strongly correlated with the degree of its neighbors. Characterizing the exact shape of the degree distribution is not an easy task, and has been the focus of a study by Seshadri et al. [23]. They observed that the degree distribution of their data can be fitted with a Double Pareto Log Normal (DPLN) distribution, two power laws joined by a hyperbolic segment, which can be related to a model of social wealth acquisition ruled by a lognormal multiplicative process. These different degree distributions are depicted in Figure 2. Interestingly, let us note that the time spans of the three aforementioned datasets are different: Aiello et al. have data over one day, Nanavati et al. over one week, and Seshadri et al. over one month. Degree distributions in mobile phone networks. The degree distributions of several datasets have comparable features, but differences in the construction, the time range of the dataset and the size of the system lead to different shapes.
Note the bump in (d), when non-reciprocal links are taken into account. (a) Aiello, W. et al., 'A random graph model for massive graphs', in: Proceedings of the thirty-second annual ACM symposium on theory of computing, pp 171-180 [12] ©2000 Association for Computing Machinery, Inc. Reprinted by permission. http://doi.acm.org/10.1145/335305.335326. (b) Nanavati, A.A. et al., 'On the structural properties of massive telecom call graphs: findings and implications', in: Proceedings of the 15th ACM international conference on information and knowledge management, pp 435-444 [19] ©2006 Association for Computing Machinery, Inc. Reprinted by permission. http://doi.acm.org/10.1145/1183614.1183678. (c) Seshadri, M. et al., 'Mobile call graphs: beyond power-law and lognormal distributions.' in: Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining, pp 596-604 [23] ©2008 Association for Computing Machinery, Inc. Reprinted by permission. http://doi.acm.org/10.1145/1401890.1401963. (d) Figure reproduced from [25]. Krings et al. dug deeper into this topic, and investigated the effect of the placement and size of the aggregation time window [24]. They showed that the size of the time window of aggregation can have a significant influence on the distributions of degrees and weights in the network. The authors also observed that the degree and weight distributions become stationary after a few days and a few weeks respectively. The placement of the time window has the most influence for short time windows, and depends mostly on whether the window contains holiday periods or weekends, during which behavioral patterns have been shown to be significantly different from those of normal weekdays. What information do we get from these distributions? They mostly reflect the heterogeneity of communication behaviors, a common feature of complex networks [20].
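As an illustration of the exponent-fitting question raised by Clauset et al. [22], the following Python sketch applies their maximum-likelihood estimator for a continuous power law to a synthetic sample generated by inverse-transform sampling. Real degree data are discrete and the choice of \(x_{min}\) matters greatly, so this is only a sketch of the continuous case; the sample size and parameters are invented for the example.

```python
import math
import random

def powerlaw_mle(xs, xmin):
    """Maximum-likelihood estimate of the exponent alpha of a continuous
    power law p(x) ~ x^(-alpha) for x >= xmin (Clauset et al. [22]):
    alpha = 1 + n / sum(ln(x_i / xmin))."""
    xs = [x for x in xs if x >= xmin]
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)

# synthetic sample from p(x) ~ x^(-2.5), x >= 1, via inverse-transform sampling:
# if u is uniform on (0, 1], then x = xmin * u^(-1/(alpha-1)) follows the law
random.seed(1)
alpha, xmin = 2.5, 1.0
sample = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
          for _ in range(50000)]
print(round(powerlaw_mle(sample, xmin), 1))  # recovers a value close to 2.5
```

The standard error of this estimator shrinks as \((\alpha - 1)/\sqrt{n}\), which is why a large sample recovers the exponent so tightly; on small or truncated degree samples the estimate is much less reliable.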
The fat tail of the degree distribution is responsible for large statistical fluctuations around the average, an indication that there is no particular scale representative of the system. The majority of users have a small number of contacts, while a tiny fraction of nodes are hubs, or super-connectors. However, it is not clear whether these hubs represent truly popular users or are artefacts of noise in the data, as was observed by Onnela et al. [25] in their comparison of the reciprocated and non-reciprocated networks. The heterogeneity of degrees is also observed for node strengths and link weights, as is to be expected for social networks. All studies also mention a high clustering coefficient, which indicates that the nodes arrange themselves locally in well-organized structures. We will address this topic in more detail further on. However large the datasets studied may be, one may still question the significance of the measures presented above. The data studied always concern only a limited (yet significant) sample of the population, and it is very difficult to examine whether this sample is biased or not without additional information on the population of the country and the users of the mobile provider considered. We will discuss this topic further in Section 7, but the topological properties presented in this section, such as the degree distributions, should be analyzed with care, as one may only expect qualitatively close results for similar datasets on similar populations. If the sample of the population studied is biased, and one only observes people that share specific characteristics (such as age, gender, or a specific profile), then the topological properties might be very different from those characterizing the mobile phone networks presented above. We have discussed the impact of the size of the time window of observation on the results.
Most studies try to infer general trends characterizing the network of acquaintances of a population, based on observations during only a finite time window. The ideal model capturing the network of people would be based on a very long time of observation of interactions between people of the whole population around the globe. This is of course impossible to achieve, and limited time windows of observation as well as population sampling introduce inherent biases in the results, even though these biases are almost impossible to characterize and quantify.

Advanced network characteristics

Beyond statistical distributions, more complex analyses provide a better understanding of the structure of our communication networks. The heterogeneity of link weights deserves particular attention. Strong links represent intense relationships, hence the correlation between weight and topology is of primary interest. Recalling that mobile call graphs show a high clustering coefficient, and thus are locally dense, one can differentiate links based on their position in the network. The overlap of a link, introduced in [14] (and illustrated in Figure 3), is an appropriate measure which characterizes the position of a link as the ratio of the number of observed common neighbors \(n_{ij}\) over the maximal possible number, depending on the degrees \(k_{i}\) and \(k_{j}\) of the nodes, and defined as: $$ O_{ij} = \frac{n_{ij}}{(k_{i} - 1) + (k_{j} - 1) - n_{ij}}. $$ The authors show that link weight and topology are strongly correlated, the strongest links lying inside dense structures of the network, while weaker links act as connectors between these densely organized groups. This finding has an important consequence for processes such as link percolation or the spread of information on networks, since the weak ties act as bridges between otherwise disconnected dense parts of the network, illustrating Granovetter's hypothesis on the strength of weak ties [26]. Overlap of a link in a network.
(Left) The overlap of a link is defined as the ratio between the number of common neighbors of both nodes and the maximum possible number of common neighbors. Here, the overlap is given for the green link. (Right) The average overlap increases with the cumulative weight in the real network (blue circles) and is constant in the random reference where link weights are shuffled (red squares). The overlap also decreases with the cumulative betweenness centrality \(P_{cum}(b)\) (black diamonds). Figure reproduced from [14]. The structure of the dense subparts of the network provides essential information on the self-organizing principles lying behind communication behaviors. Before moving to the analysis of communities, we will focus on properties of cliques. The structure of cliques is reflected by how weights are distributed among their links. In a group where everyone talks to everyone, is communication balanced? Or are small subgroups observable? A simple measure of the balance of weights is the coherence \(q(g)\). This measure was introduced in [27] before its application to mobile phone data in [25], and is calculated as the ratio between the geometric mean of the link weights and their arithmetic mean, $$ q(g) = \frac{ (\prod_{ij \in l_{g}}w_{ij} )^{1/|l_{g}|}}{\frac{\sum_{ij\in l_{g}}w_{ij}}{|l_{g}|}}, $$ where g is a subgraph of the network and \(l_{g}\) is its set of links. This measure takes values in the interval \((0,1]\), with 1 corresponding to perfectly balanced weights. On average, cliques appear to be more coherent than would be expected in the random case, in particular triangles, which show high coherence values. On a related topic, Du et al. [28] focused instead on the propensity of nodes to participate in cliques, and in particular on the balance of link weights inside triangles. Their observations differ slightly from those of Onnela et al.: on average, the weights of links in triangles can be expressed as powers of one another.
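Both the overlap and the coherence measures discussed above are straightforward to compute. The Python sketch below assumes an undirected graph as a dictionary of neighbor sets and weights as a plain list; the toy graph and weight values are invented for illustration.

```python
import math

def overlap(adj, i, j):
    """Neighborhood overlap of link (i, j), as in Onnela et al. [14]:
    O_ij = n_ij / ((k_i - 1) + (k_j - 1) - n_ij)."""
    n_ij = len(adj[i] & adj[j])  # number of common neighbors
    denom = (len(adj[i]) - 1) + (len(adj[j]) - 1) - n_ij
    return n_ij / denom if denom > 0 else 0.0

def coherence(weights):
    """Coherence q(g) of a subgraph with link weights `weights` [25, 27]:
    geometric mean over arithmetic mean, in (0, 1], 1 = balanced weights."""
    n = len(weights)
    geo = math.prod(weights) ** (1.0 / n)
    return geo / (sum(weights) / n)

adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
print(round(overlap(adj, 1, 2), 2))     # one common neighbor (node 3) -> 0.5
print(round(coherence([5, 5, 5]), 3))   # balanced triangle -> 1.0
print(round(coherence([1, 1, 10]), 3))  # imbalanced weights -> well below 1
```

The geometric mean is dragged down by any small weight, so a triangle dominated by one strong pair scores a low coherence even though its arithmetic mean is high.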
The authors managed to reproduce this singular situation with a utility-driven model, where users try to maximize their return from contacts. The previous analysis of cliques and triangles opens the way for an analysis of more complex structures, such as communities in mobile phone networks. The analysis of communities provides information on how communication networks are organized at large scale. In conjunction with external data, such as age, gender or cultural differences, it provides sociological information on how acquaintances are distributed over the population. From a corporate point of view, the knowledge of well-connected structures is of primary importance for marketing purposes. In this paragraph, we will only address simple results on community analysis; this topic will be addressed again further in the document, where it relates to the geographic dispersal of networks or to dynamical networks. At small scale, traditional clustering techniques may be applied, see [29] and [30] for examples of applications on small datasets. However, on large mobile call graphs involving millions of users, such clustering techniques are outperformed by community detection algorithms. Uncovering the community structure of a mobile phone network is highly dependent on the definition of communities and the detection method used. One could argue that there exist as many plausible analyses as there are community detection methods. Moreover, the particular structure of mobile call graphs raises issues for traditional community detection methods. Tibély et al. [31] show that even though some community detection methods perform well on benchmark networks, they do not produce clear community structures on mobile call graphs. Mobile call graphs contain many small tree-like structures, which are badly handled by most community detection methods.
A comparison of three well-known methods, the Louvain method [32], Infomap [33] and the Clique Percolation method [34], shows that they produce different results on mobile call graphs. The Louvain method and Infomap both build a partition of the nodes of the network, so that every node belongs to exactly one community. In contrast, Clique Percolation only keeps dense subparts of the network as communities (see Figure 4). Examples of communities detected with different methods. The different methods are the InfoMap method (IM, red), the Louvain method (LV, blue) and the Clique Percolation method (CP, green). For each method, four examples are shown, with 5, 10, 20 and 30 nodes. The coloured links are part of the community, the grey nodes are the neighbors of the represented community. While IM and LV find almost tree-like structures, CP finds dense communities [31]. Reproduced figure with permission from Tibély, G. et al., Phys Rev E 83(5):056125, 2011. Copyright (2011) by the American Physical Society. http://dx.doi.org/10.1103/PhysRevE.83.056125. As observed by Tibély et al., small tree-like structures are often considered as communities, although their structure is sparse. Such a result is counter-intuitive given the intrinsic meaning of communities, and raises the question: is community detection therefore unusable on mobile call graphs? The results probably have to be considered with caution, but as this is always the case for community detection methods, whatever network is used, this special character of communities in mobile call graphs appears as a particularity rather than a problem. Although they might have singular shapes, communities can provide significant information when usefully combined with external information. This is demonstrated by the study of the linguistic distribution of communities in a Belgian mobile call graph [32], where the communities returned by the Louvain method strikingly reveal a well-known linguistic split, as illustrated in Figure 5.
Community detection in Belgium. (Top) The communities of the Belgian network are colored based on their linguistic composition: green for Flemish, red for French. Communities having a mixed composition are colored with a mixed color, based on the proportion of each language. (Bottom) Most communities are almost monolingual. Figures reproduced from [32]. The notion of communities in social networks, such as those rendered by mobile phone networks, has raised a debate on the exact vision one has of what a community is and what it is not. In particular, several authors have favored the idea of overlapping communities, such that one node may belong to several communities, in opposition to the classical vision in which communities form a partition of the nodes of a network. An argument in favor of this vision is that one is most often part of several groups of acquaintances which do not share common interests, such as family, work and sports activities. In [35], Ahn et al. show how overlapping communities can be detected by partitioning edges rather than nodes, and illustrate their method on a mobile phone dataset. For each node, they had additional information about its center of activities, with which they showed that communities were geographically consistent. This discussion of the exact definition of communities and of the best method to detect them can further be influenced by a series of factors. Indeed, one could want to introduce additional information before searching for communities, such as, for example, age, gender or specific profiles of people. Moreover, when spatio-temporal information is available, one could want to detect strong communities whose links remain active through time, or detect which geographic areas belong to the same community, thus partitioning space, as we will explain further in this paper.
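To make the partition-based view of communities concrete, the following Python sketch computes Newman's modularity, the quantity that the Louvain method [32] greedily optimizes. The six-node toy graph, two triangles joined by one bridge link, is an assumption made for illustration, vastly smaller than any mobile call graph.

```python
def modularity(adj, partition):
    """Newman modularity Q of a node partition on an undirected graph.

    `adj` maps nodes to neighbor sets; `partition` is a list of node sets.
    Q = sum over communities c of [ e_c / m - (d_c / 2m)^2 ], where e_c is
    the number of internal edges, d_c the total degree of c, and m the
    total number of edges. This is the objective the Louvain method [32]
    optimizes greedily.
    """
    m = sum(len(neigh) for neigh in adj.values()) / 2  # total edges
    q = 0.0
    for comm in partition:
        e_c = sum(1 for i in comm for j in adj[i] if j in comm) / 2
        d_c = sum(len(adj[i]) for i in comm)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# two triangles {1,2,3} and {4,5,6} joined by the single bridge link 3-4
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(round(modularity(adj, [{1, 2, 3}, {4, 5, 6}]), 3))  # the natural split
print(round(modularity(adj, [{1, 2, 3, 4, 5, 6}]), 3))    # one block: Q = 0
```

Putting all nodes in a single community always yields Q = 0, so a positive score quantifies how much denser the communities are than a random rewiring with the same degrees would suggest.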
Finally, as mentioned in the previous paragraph, if the available sample of the population is biased, the structure of the network, and hence also the detection of communities, may be influenced. All these additional considerations might significantly change the result of the detection of communities, including their internal topology. The use of mobile call data for the analysis of social relationships raises two questions. First, how faithfully does such a dataset reflect real interactions? Second, can we extract information on the users themselves from their calling behavior? It has often been claimed that mobile phone data analysis is a significant advance for social sciences, since it allows scientists to use massive datasets containing the activity of entire populations. The study of mobile phone datasets is part of an emerging field known as computational social science [36]. These massive datasets, it is said, are free from the bias of self-reporting, namely that the answers to a survey are usually biased by the subject's own perception, which is not objective. Still, the question remains: how much does self-reporting differ from our real behavior, and what is the exact added value of having location data? This has been studied by Eagle et al. [37] in the well-known Reality Mining project. By studying the behavior of about 100 persons, both by recording their movements and encounters using GSM and Bluetooth technology and through surveys, they managed to quantify the difference between self-reported behavior and what could be observed. It appears that observed behavior strongly differs from what was self-reported, confirming that the subjectivity of the subjects' own perception produces a significant bias in surveys. In contrast, collected data allow this bias to be reduced significantly.
However, mobile phone data introduce a different bias, namely that they only contain social contacts that were expressed through phone calls, thus missing all other types of social interaction [38]. While most studies use external data as a validation tool to confirm the validity of results, Blumenstock et al. briefly addressed a different question, namely whether it is possible to infer information on people's social class from their communication behavior. Apparently, this task is hard to perform, even if significant differences in calling behavior appear between different classes of the population [39]. While inferring information about users from their calling activity still seems difficult, many studies show strong correlations between calling behavior and other information included in some datasets, such as gender or age. In a study on landline use, Smoreda et al. highlight the differences in the use of the domestic telephone based on the genders of both the caller and the callee [40], and show not only that women call more often than men but also that the gender of the callee has more influence than the gender of the caller on the duration of the call. The same trends have also been observed in later studies of mobile phone datasets [41]. Going beyond observing the gender differences in mobile phone use, Frias-Martinez et al. propose a method to infer the gender of a user based on several variables extracted from mobile phone activity [42], and achieve a prediction success rate between 70% and 80% on a dataset from a developing economy. In a later study on data from Rwanda, Blumenstock et al. show that differences of social class induce more striking differences in mobile phone use than differences of gender [43]. Rather than analyzing the nodes of a network, Chawla et al.
take a closer look at the links of the network, and introduce a measure of reciprocity to quantify how balanced the relationship between two users is [44]: $$ R_{ij} = \bigl| \ln(p_{ij}) - \ln(p_{ji}) \bigr|, $$ where \(p_{ij}\) is the probability that if i makes a communication, it will be directed towards j. They test this measure on a mobile communications dataset, and show that there are very large degrees of non-reciprocity, far above what could be expected if only balanced relationships were kept. Going one step further, instead of inferring information on the nodes of the mobile call graph, Motahari et al. study the difference in calling behavior depending on the relationship between two subscribers, characterizing different types of links. They show that links within a family generate the highest number of calls, and that the network topology around those links looks significantly different from the topology of a network of utility communications [45]. If we can infer so much information from looking at mobile phone communications, would it be possible to predict existing acquaintances that are unobserved in the available dataset? This question is known as the link prediction problem, and has been addressed by several research teams. As the approach usually takes into account the time component of the dataset, we will address this topic further in Section 4.

Adding space - geographical networks

Besides basic CDR data, geographic information is sometimes available about the nodes, such as the home location (available for billing purposes) or the most often used antenna. This makes it possible to assign each node to one geographic point, and to study the interplay between geography and mobile phone usage. Studies on geographical networks have already been performed on a range of different types of networks [46].
One of the very basic applications is to use mobile phone data to estimate the density of population in the different regions covered by the dataset. Deville et al. explored this idea [47]: using the number of people calling from each antenna, they are able to produce timely estimates of the population density in France and Portugal. In the developing world, census data are often very costly or even impossible to obtain, and existing data are often very old and outdated. Using CDRs can then provide very useful and up-to-date information on the actual density of population in remote parts of the world. Another example is given by Sterly et al., who mapped an estimate of the density of population of Ivory Coast using a mobile phone dataset [48], as illustrated in Figure 6. Population density estimates. (Left) Population density estimates from the Afripop project [214]. (Right) Population density estimates from mobile phone data. Figure reproduced from [48].

Relationship space-communication

Lambiotte et al. [13] investigated the interplay between geography and communications, and assigned each of the 2.5 million users from a Belgian mobile phone operator to the ZIP code location where they were billed. By approximating the position of the users to the center of each ZIP code area, they showed that the probability that two users are connected decreases with the distance r separating them, following a power law of exponent −2. The probability of a link being part of a triangle decreases with distance, up to a threshold distance of 40 km, after which the probability is constant. Interestingly, this threshold of 40 km is also a saturation point for the average duration of a call (see Figure 7).
A different study on the same dataset also showed that the total communication duration between communes in Belgium was well fitted by a gravity law, with a positive linear contribution of the number of users in each commune and a negative quadratic influence of distance [49, 50]: $$ l_{ab} = \frac{c_{a}c_{b}}{r_{ab}^{2}}, $$ where \(l_{ab}\) represents the total communication between communes a and b, \(c_{a}\) and \(c_{b}\) the number of customers in each commune and \(r_{ab}\) the distance that separates them. Average duration of a call depending on the distance between the callers. A saturation point is observed at 40 km. Figure reproduced from [13]. While it seems clear that distance has a negative impact on communication, its exact functional form does not appear to be universal. Onnela et al. [51] observed in a different dataset a probability of connection decreasing as \(r^{-1.5}\) rather than the gravity model observed by Lambiotte et al., and a later study on Ivory Coast by Bucicovschi et al. [52] observed that the total duration of communication between two cities decays with \(r^{-1/3}\). However, these differences might be explained by the differences that exist between the studied countries, such as the distribution of the population density. A different study on mobility data from the location-based service Foursquare [53] levelled those variations using a rank-based distance [54], which could also be helpful in this case. Another comparison is presented by Carolan et al. [55] who compare two different types of distance, namely the spatial travel distance and the travel time between two cities. Interestingly, it appears that using the spatial distance rather than the travel time gives a better fit of the number of communications between two cities with the gravity model. Their observations also show that the gravity model fits the data better when data is collected during the daytime on weekdays than during evenings and weekends.
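The gravity law lends itself to a one-line sketch. In the helper below the function name, the prefactor G and the configurable exponent are our own assumptions for illustration; [49, 50] report the functional form with exponent 2 for Belgium:

```python
def gravity(c_a, c_b, r_ab, G=1.0, gamma=2.0):
    """Gravity-law estimate of total communication between two communes:
    l_ab = G * c_a * c_b / r_ab**gamma.  G is a fitted constant (assumed
    here); gamma ~= 2 is the exponent reported for Belgium [49, 50]."""
    return G * c_a * c_b / r_ab ** gamma

# Doubling the distance at fixed populations quarters the predicted volume.
near = gravity(10_000, 20_000, 10.0)
far = gravity(10_000, 20_000, 20.0)
print(near / far)  # → 4.0
```

Fitting G and gamma to real data would be done on log-transformed volumes, since the relation is linear in log space.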
Instead of studying the communication between cities, Schläpfer et al. looked at the relationship between city size and the structure of local networks of people living in those cities [56]. They show that the number of contacts and communication activity both grow with city size, but that the probability of being friends with a friend's friend remains the same independently of the city size. Jo et al. propose another approach and study how the distance between a person and the person they contact most evolves with age [57]. They show that young couples tend to live farther apart than older couples. Instead of only taking into account the distance between two places to predict the number of links between them, Herrera-Yagüe et al. make another hypothesis, namely that the probability that someone living in a location i has contacts with a person living in another location j is inversely proportional to the total population within an ellipse [58]. The ellipse is defined as the one whose foci are i and j, and whose surface is the smallest such that both circles of radius \(r_{ij}\) centred around i and j are contained in the ellipse. If we name \(e_{ij}\) the total population within the ellipse, the number of contacts between locations i and j is thus described by: $$ T_{ij} = K\frac{n_{i}n_{j}}{e_{ij}}, $$ where K is a normalisation parameter depending on the total number of relationships to predict, and \(n_{i}\) and \(n_{j}\) are the populations of locations i and j respectively. Further, Onnela et al. also studied the geographic structure of communities, and showed on the one hand that nodes that are topologically central inside a community may not be central from a geographical point of view, and on the other hand that the geographical shape of communities varies with their size.
The geographical span of communities smaller than 30 individuals increases smoothly with size, but jumps suddenly at a size of 30, a transition which could not be clearly explained by the authors, see Figure 8. Average geographic span (red) for communities and average geographic span for the null model (blue). A bump is observed for communities of size 30 and more, which could not be reproduced by the different null models. Figure reproduced from [51]. Geographic partitioning The ability to place customers in higher-level entities, such as communes or counties, gave researchers the idea of drawing the 'social borders' inside a country based on the interactions between those entities [59]. Individual call patterns of users are aggregated at a higher level into a network of entities, which can in turn be partitioned into a set of communities based on the intensities of calls between the nodes of this macroscopic network. It is important to notice that, in contrast with the microscopic network (the network of users), the macroscopic network is not sparse at all. Since the nodes represent the aggregated behavior of many users, there is a high chance of having a link between most pairs of communes or counties. Hence, the weights on the links of the macroscopic network are of crucial importance, since they define the complete structure of the network. Such a partitioning exercise using CDR datasets has been applied to Belgium and Ivory Coast, among others [52, 60]. An initial study of the communities in Belgium [61] used the Louvain method, optimizing modularity for weighted directed networks, to partition the Belgian communes based on two link weights: the frequency of calls between two communes and the average duration of a call. The obtained partitions were geographically connected, with the influence of distance, of influential cities, and of the cultural barrier of language being observable in the optimal partitions.
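The aggregation step from the microscopic to the macroscopic network can be sketched as follows. The event tuples and field names are our own hypothetical example, but the logic — summing user-level calls into commune-level link weights before any partitioning — mirrors what such studies do:

```python
from collections import defaultdict

def aggregate(user_calls, home):
    """Collapse a user-level call list into a weighted commune-level
    network.  user_calls is a list of (caller, callee, duration) events,
    home maps each user to a commune; each directed commune pair gets
    a call count and a total duration (a sketch, not any paper's code)."""
    macro = defaultdict(lambda: [0, 0.0])  # (commune_a, commune_b) -> [calls, total duration]
    for u, v, dur in user_calls:
        key = (home[u], home[v])
        macro[key][0] += 1
        macro[key][1] += dur
    return dict(macro)

home = {"u1": "Brussels", "u2": "Ghent", "u3": "Ghent"}
events = [("u1", "u2", 60.0), ("u1", "u3", 30.0), ("u2", "u3", 10.0)]
print(aggregate(events, home))
```

Either weight (frequency of calls or total/average duration) can then feed a community detection algorithm such as Louvain.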
Given that the intensity of communication between two cities is well modeled by a gravity law, Expert et al. [62] proposed to replace the null model of Newman's modularity by a more appropriate one when geographic information is available. The spatial modularity (Spa) compares the intensity between communes to a null model influenced both by the sizes \(c_{a}\) and \(c_{b}\) of the communes and by the distance that separates them $$ p_{ab}^{\mathrm{Spa}} = c_{a}c_{b}f(r_{ab}). $$ The influence of distance is estimated from the data by a function f, which is calculated for distance bins \([r-\epsilon, r+\epsilon]\) as $$ f(r) = \frac{\sum_{a,b|r_{ab}\in[r-\epsilon,r+\epsilon ]}A_{ab}}{\sum_{a,b|r_{ab}\in[r-\epsilon,r+\epsilon]}c_{a}c_{b}}. $$ Using their null model, the authors obtained an almost perfect bipartition of the Belgian communes which renders the Belgian linguistic border. Moreover, they showed with a simple example that such a null model makes it possible to remove the influence of geography and obtain communities showing geography-independent features. On an identical topic, Ratti et al. used a spectral modularity optimization algorithm to partition the map of Great Britain [63] based on phone calls between geographic locations. Similarly to results obtained on Belgium, they obtained spatially connected communities after fine-grained tuning of their algorithm, which correspond to meaningful areas, such as Scotland or Greater London, see Figure 9. A stability analysis of the obtained partition showed that while some variation appears on the boundary of communities, the obtained communities are geographically centered at the same place. The intersections between several results of the same algorithm showed 11 spatially well-defined 'cores' corresponding to densely populated areas of Great Britain. Interestingly, the map of the cores loosely corresponds to the historical British regions. Geographic partitioning of countries.
(Top) Communities in Belgium, obtained through modularity optimization. Communities are geographically well-balanced and are centred around important cities (gray dots). Figure reproduced from [61]. (Bottom) Communication network in Great Britain (80% of strongest links). The colors correspond to the communities found by spectral modularity optimization. Figure reproduced from [63]. A later study using antenna-to-antenna communication volumes in Ivory Coast confirmed the very strong influence of language on the formation of communities in a large country. Using the same method as was used by Blondel et al. for the Belgian dataset, they show that the borders of the communities formed in Ivory Coast strongly correlate with the language borders, even in the presence of many more than two language groups [52]. Going a bit further, Blumenstock et al. introduce a measure of the social and spatial segregation that can be observed through mobile phone communication records [64]. They define the spatial segregation of ethnicity t in a region r as: $$ w_{tr} = \frac{N_{tr}}{N_{r}}, $$ where \(N_{tr}\) is the number of people of ethnicity t living in region r and \(N_{r}\) is the total population of region r. They also define the social segregation of ethnicity t as the fraction of contacts that individuals of ethnicity t form with the same type of people: $$ H_{t} = \frac{s_{t}}{s_{t} + d_{t}}, $$ where \(s_{t}\) is the number of contacts that a person of type t has with people from the same ethnicity, and \(d_{t}\) is the number of contacts that people of type t have with people from other ethnicities. With these measures, it is then possible to map the more or less segregated parts of a city, see which ethnicities occupy which regions, and show how strong or weak the links between these ethnicities are.
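A minimal sketch of the two segregation measures above, assuming the raw counts are available (function and variable names are our own):

```python
def spatial_segregation(n_tr, n_r):
    """w_tr: share of region r's population belonging to ethnicity t."""
    return n_tr / n_r

def social_segregation(s_t, d_t):
    """H_t: fraction of ties that individuals of ethnicity t form
    within their own group (s_t same-group, d_t cross-group contacts)."""
    return s_t / (s_t + d_t)

# A region where 300 of 1,000 inhabitants belong to ethnicity t,
# whose members keep 80 of every 100 contacts within the group:
print(spatial_segregation(300, 1000))  # → 0.3
print(social_segregation(80, 20))      # → 0.8
```

Mapping \(w_{tr}\) across regions shows where each ethnicity lives; comparing \(H_{t}\) to the population share reveals whether ties are more in-group than random mixing would predict.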
Communications reveal regional economy Lately, with the growth of mobile phone coverage even in the most remote regions of the developing world, a new question has arisen, namely: is it possible to use CDR data to evaluate the socio-economic state of the different regions of a country? Being able to estimate and update poverty rates in the different regions of a country could help governments make informed policy decisions with knowledge of how their country is developing economically. A first step in that direction was explored by Eagle et al. in a study using data from the UK [65]. The authors investigated whether a relationship could be found between the structure of a user's social network and the type of environment in which they live. Using CDRs of both fixed landlines (99% coverage) and mobile phones (90% coverage), they showed that the social and geographical diversity of nodes' contacts, measured using the entropy of contact frequencies, correlates positively with a socio-economic factor of the neighborhood. Given a node i, calling each of its \(d_{i}\) neighbors j at frequency \(p_{ij}\), and calling each of the A locations a at frequency \(p_{ia}\), its social and spatial diversity are given by $$ D_{\mathrm{social}}(i) = \frac{-\sum_{j}p_{ij}\log p_{ij}}{\log d_{i}},\quad\quad D_{\mathrm{spatial}}(i) = \frac{-\sum_{a} p_{ia}\log p_{ia}}{\log A}, $$ which equal 1 if the node's contacts are maximally diversified. In Figure 10, the authors compare a composite measure of both diversities with the socio-economic factor of the neighborhood. Average social wealth as a function of social and geographic diversity. From Eagle et al., Network diversity and economic development, Science 328(5981):1029 (2010) [65]. Reprinted with permission from AAAS. In a recent study, this time with data from Africa, Mao et al. tried to determine which characteristics of the mobile phone network could best describe the socio-economic status of a developing region [66].
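Eagle et al.'s diversity measures boil down to a normalized entropy over call frequencies. The sketch below is our own implementation of the formula defined above, working on raw per-contact (or per-location) call counts:

```python
import math

def diversity(frequencies):
    """Normalized entropy of contact (or location) call frequencies,
    as in Eagle et al.'s social/spatial diversity: 1 means calls are
    spread evenly, values near 0 mean they concentrate on one contact."""
    total = sum(frequencies)
    probs = [f / total for f in frequencies if f > 0]
    if len(probs) <= 1:
        return 0.0  # a single contact carries no diversity
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(probs))

print(diversity([10, 10, 10, 10]))  # evenly spread -> 1.0
print(diversity([97, 1, 1, 1]))     # concentrated -> close to 0
```

The same function serves for both \(D_{\mathrm{social}}\) (frequencies per contact) and \(D_{\mathrm{spatial}}\) (frequencies per location).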
Mao et al. introduce an indicator named CallRank, obtained by running the weighted PageRank algorithm on an aggregated mobile calling graph of Ivory Coast, where nodes are the antennas and the link weights are the numbers of calls between each pair of antennas. They observe that a high CallRank index seems to correspond well to a region that is important for the national economy. However, lacking accurate data to validate the results, they only conclude that this measure is probably a good indicator, without being able to evaluate its accuracy quantitatively. Another analysis of the same dataset was proposed by Smith-Clarke et al. who extracted a series of features to see which ones showed the best correlation with poverty levels [67]. The authors show that besides the total volume of calls, poverty levels are also linked to deviations from the expected flow of communications: if the amount of communications from and to a certain area is significantly lower than expected, then higher poverty levels are to be expected in that area. Another indicator of poverty was explored by Frias-Martinez et al. who analyzed the link between the mobility of people and the socio-economic levels of a city in Latin America [68]. The authors propose several measures to quantify the mobility of users, and show that socio-economic levels present a linear correspondence with three indicators of mobility, namely the number of different antennas used, the radius of gyration, and the diameter of the area of frequently visited locations, indicating that the more mobile people are, the less poor the area in which they live seems to be. In a further study by the same research group, Frias-Martinez et al. go one step further, and propose a method not only to estimate, but also to forecast future socio-economic levels, based on time series of different variables gathered from mobile phone data [69].
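Among the mobility indicators just mentioned, the radius of gyration is straightforward to sketch. The version below assumes planar coordinates for simplicity (real traces require geodesic distances between antenna positions):

```python
import math

def radius_of_gyration(points):
    """Radius of gyration of a mobility trace: root-mean-square
    distance of the recorded positions from their center of mass
    (planar coordinates assumed for simplicity)."""
    n = len(points)
    cx = sum(x for x, y in points) / n
    cy = sum(y for x, y in points) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points) / n)

# A commuter bouncing between home (0, 0) and work (10, 0):
trace = [(0, 0), (10, 0), (0, 0), (10, 0)]
print(radius_of_gyration(trace))  # → 5.0
```

A user confined to one neighborhood yields a small radius, while a long-distance commuter yields a large one, which is what makes this quantity a useful mobility indicator.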
Frias-Martinez et al. show preliminary evidence that socio-economic levels could follow a pattern, allowing for prediction with mobile phone data. Another valuable, and rather new, source of data extracted from mobile phone activity is the history of airtime purchases of each user. Using this data on the network of Ivory Coast, Gutierrez et al. propose another approach to infer the socio-economic state of the different regions of a developing country [70]. The authors make the hypothesis that people who make many small purchases are probably less wealthy than those who make fewer larger purchases, supposing that poorer users will not have enough cash flow to buy large amounts at once. Figure 11 shows the map of average purchases throughout the country. Here again, lacking reliable external data to validate those results and compare them with socio-economic data, the authors provide an interpretation of the differences observed between the regions, and show that their hypothesis seems plausible. Average purchases of airtime credit in Ivory Coast. (a) Abidjan, (b) Liberian border, (c) Roads to Mali and Burkina Faso, (d) Road to Ghana. Figure reproduced from [70]. Adding time - dynamical networks A particularity of a mobile call graph is that its links are precisely located in time. Although each call has a precise time stamp and duration, the studies presented so far consider mobile call graphs as static networks, where edges are aggregated over time. This aggregation leads to a loss of information, on the one hand about the dynamics of the links (some may appear or disappear during the collection period), and on the other hand about the dynamics on the links. Recently, some authors have addressed this issue by taking the dynamical component of links into account in the definition of such networks.
The topic of dynamical - or temporal - networks has been studied broadly for several types of networks [71], but the study of mobile phone graphs as evolving networks is rather recent, and given their inherently dynamical nature, mobile call graphs are excellent sources of information for such studies. Dynamics of structural properties One such question regards the so-called link prediction problem, that is, predicting, for a future time window, whether a given link will appear in or disappear from the network. This problem has already been studied in machine learning and applied to different empirical networks such as e-mails and co-authorship [72] or movie preferences [73]. Some work has already been carried out in this framework using mobile phone data for social network analysis, of which we give a short overview here. How long does a link last in a network? By analyzing slices of 2 weeks of a mobile phone network, Hidalgo and Rodriguez-Sickert observed that the frequency of presence of links in the different slices, the persistence, followed a bimodal distribution [74], as illustrated on Figure 12. The persistence of link \((i,j)\) is defined as: $$ p_{ij} = \frac{\sum_{T} A_{ij}(T)}{M}, $$ where \(A_{ij}(T)\) is 1 if the link \((i,j)\) is in slice T and 0 otherwise, M being the number of slices. Most links in the network are only present in one window, and the probability of a link being observed in several windows decreases with the number of windows, but there is an unexpectedly large number of links that are present in all windows. These highly recurrent links thus represent strong, temporally consistent relationships, in contrast with the large number of volatile connections appearing in only one of the slices. A deeper analysis of correlations between the persistence and static measures further shows that clustering, reciprocity and high topological overlap are usually associated with strong persistence. Measures of the strength of links over time.
(Left) Distribution of the persistence of links. (Right) The fraction of surviving links as a function of time follows a power-law-like decrease [74]. Figures reprinted from Phys A: Stat Mech Appl, 387(12), Hidalgo, C.A. and Rodriguez-Sickert, C. The dynamics of a mobile phone network, pp 3017-3024, Copyright (2008), with permission from Elsevier. Raeder et al. [75] dig deeper into this last topic, attempting to predict which links will decay and which will persist, based on several local indicators. They quantify the information provided by each indicator by the decrease in entropy of the probability that an edge persists, and find that the most informative indicators are the number of calls passed between both nodes as well as its scaled version. Trying both a decision-tree classifier and a logistic regression classifier, they manage to correctly predict about 70% of the persistent edges and decays. While most approaches to the link prediction problem use the network structure and similarity between nodes to extract information on future interactions [74–76], Miritello goes a step further and introduces temporal components to build a new link prediction model [77]. The author defines the temporal stability of a link as: $$ \Delta_{ij}=t_{ij}^{\mathrm{max}}-t_{ij}^{\mathrm{min}}, $$ where \(t_{ij}^{\mathrm{min}}\) and \(t_{ij}^{\mathrm{max}}\) are, respectively, the time instants at which the link has first and last been active in the observation period T. High values of temporal stability, close to T, indicate that the link may be active beyond the time window of observation, whereas small values of \(\Delta_{ij}\) may indicate that the link was only active for a short time. Miritello further shows that introducing temporal components in a link prediction model significantly improves its performance, and presents a threshold model achieving 80% accuracy in predicting whether a link will decay or persist in a future time window.
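Both link-level measures defined in this subsection — the persistence and the temporal stability — reduce to a few lines of code. The sketch below uses our own toy data structures (a list of per-slice edge sets, and a list of activation timestamps):

```python
def persistence(slices, a, b):
    """Fraction of time slices in which the undirected link (a, b)
    appears, as in Hidalgo and Rodriguez-Sickert's persistence p_ij;
    slices is a list of edge sets, one per observation window."""
    hits = sum(1 for edges in slices if (a, b) in edges or (b, a) in edges)
    return hits / len(slices)

def temporal_stability(timestamps):
    """Miritello's temporal stability: elapsed time between the first
    and last observed activation of a link."""
    return max(timestamps) - min(timestamps)

slices = [{("u", "v"), ("v", "w")}, {("u", "v")}, {("w", "x")}, {("u", "v")}]
print(persistence(slices, "u", "v"))    # recurrent link: 0.75
print(persistence(slices, "v", "w"))    # volatile link: 0.25
print(temporal_stability([3, 40, 12]))  # active from t=3 to t=40 -> 37
```

A link with high persistence and a temporal stability close to the full observation period T is the kind of tie these models predict will survive into the next time window.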
Addressing a related question, Miritello also observed that it is difficult to distinguish between a link that has decayed and a link that simply has not been observed for a long time, especially when the time window of observation is only a few months long [77]. On a very close topic, Karsai et al. studied how the weights of the links in a network vary with time, how strong ties form, and how this process is related to the formation of new ties [78]. They start by measuring the probability \(p_{k}(n)\) that the next communication of an individual that has degree n will occur with the formation of a new \((n+1)\)th tie. This probability depends on the parameter k that corresponds to the final degree of the individual at the end of the observation period. They find that the process of the formation of new ties follows a very consistent pattern, namely $$ p_{k}(n) = \frac{c(k)}{n+c(k)}, $$ where \(c(k)\) is an offset constant that depends on the degree k considered. Using the measured c for each degree class, the authors then show that rescaling the distributions \(p_{k}(n)\) collapses all curves onto one (see Figure 13), suggesting that the evolution of the ego-network of each individual is governed by roughly the same mechanism. Probability of a new communication to form a new tie. Probability functions \(p_{k}(n)\) calculated for different degree groups. In the inset, symbols show the averaged \(p_{k}(n)\) values for groups of nodes with degrees between the corresponding \(k_{\mathit{min}}\) values. Figure reprinted by permission from Macmillan Publishers Ltd: Sci Rep [78], copyright (2014). The reasons for the decay and persistence of links are varied and remain largely unknown. However, Miritello et al. addressed a related question, namely: how many links can a person maintain active in time [79]?
By looking at a large time window (around 19 months of data), they evaluate how many contacts are new acquaintances, and how many ties are deactivated during a smaller time window. It appears that individuals show a finite communication capacity, limiting the number of ties that they are able to maintain active in time: in the network of a single user, the number of active ties remains approximately constant in the long term. From a social point of view, apart from the balanced social strategy between a user's communication capacity and activity, the authors distinguish two kinds of rather extreme behavior that they name social explorer and social keeper. While the social explorer shows a very high turnover in his social contacts and a very high activity compared to his capacity, keeping only a very small stable network, the social keeper has a very stable social circle, and activates and deactivates ties at a very low rate. The authors further show that the social strategy of an individual can be linked to the topology of their local network. In a related paper, Miritello et al. [80] further show that even though people who have a large network tend to spend more time on the phone than those who have few contacts, the total communication time seems to reach a maximum, and the strength of ties starts decaying for people who have more than 40 contacts. Despite this turnover in links and the fact that links appear and disappear, there seems to be some consistency in a person's network of contacts. In a related study, Saramäki et al. showed how a turnover in contacts does not imply a change in the structure of the local network around a person [81]. They study a network of students who, during the time window covered by the dataset, move from high school to college.
Despite the very high turnover in a user's contacts, the distribution of the weights on the links around the user, which the authors call the social signature of this user, stays very similar through time. From an evolving network perspective, the question of the stability and survival of communities is closely linked to the previous questions. Palla et al. studied the temporal stability of a mobile phone network [34], analyzing communities detected on slices of two weeks. They observed that the conditions for a community to survive depend on its size; small communities need to remain stable, while large groups need to be highly dynamic and often change their composition. Recently, several community detection techniques developed for static graphs have been extended to take into account the dynamics of human interactions [82]. One approach to this question is to detect communities in a multislice network, where each slice represents the network at a given point in time, and nodes of a slice are linked to their counterparts in adjacent slices [83]. However, to our knowledge this approach has not yet been applied to mobile phone networks. Using another approach on the Reality Mining dataset, Xu et al. detect communities using evolving adjacency matrices [84], and show that this approach gives more consistent results than detecting communities independently in subsequent snapshots of the network [85]. On a shorter time scale, Kovanen et al. identified temporal motifs of sequences of adjacent events involving a small number of nodes (typically 3 or 4) [86]. Events are said to be Δt-adjacent if they have at least one node in common, and the timing between the two events is less than Δt (typically of the order of minutes).
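The Δt-adjacency relation underlying these temporal motifs can be sketched as a simple pairwise check. This is our own simplified illustration of the definition, not Kovanen et al.'s full motif-mining algorithm, and it assumes a time-sorted list of (time, caller, callee) events:

```python
def adjacent_events(events, delta_t):
    """Pairs of Δt-adjacent events: two events sharing at least one
    node and separated by at most delta_t.  events is a list of
    (time, caller, callee) tuples sorted by time."""
    pairs = []
    for i, (t1, a1, b1) in enumerate(events):
        for t2, a2, b2 in events[i + 1:]:
            if t2 - t1 > delta_t:
                break  # later events are even farther in time
            if {a1, b1} & {a2, b2}:
                pairs.append(((t1, a1, b1), (t2, a2, b2)))
    return pairs

events = [(0, "a", "b"), (30, "b", "c"), (600, "a", "d")]
# Only the first two events share a node within 60 seconds of each other.
print(len(adjacent_events(events, delta_t=60)))  # → 1
```

Chains of such adjacent events are what get counted and classified into the motifs discussed above.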
Kovanen et al. analyze the most common motifs present in a mobile phone database and find that the most common temporal motifs of three events involve only two nodes, and that motifs allowing a causal hypothesis are more frequent than those that do not. The availability of timestamps in datasets makes it possible to segment calls between office hours and home hours. By supposing that calls made during office hours are for business purposes, while private calls are made early in the morning, in the evening or over the weekend, Kirkpatrick et al. managed to build two separate networks based on a mobile and landline dataset from the UK [87]. The degree and clustering coefficient distributions of both networks are mostly similar, but a deeper analysis of the network structure shows that some important differences exist between them. By decomposing the network into k-cores and monitoring the speed of information diffusion, they observe that the work network is much more connected than the leisure network, and that information diffuses almost twice as fast in it. Addressing a related question, Pielot et al. present a method to predict the attentiveness to an instant message, that is, the time it will take the receiver to attend to the message [88]. They use data on the user's interaction with the smartphone to predict with 70% accuracy how fast the receiver will pay attention to the communication. Burstiness The dynamics of many random systems are modeled by a Poisson process, in which the interval between two events follows an exponential distribution, well characterized by its average. However, it has appeared that human interactions show a different temporal pattern, with many interactions happening within very short times, separated by less frequent long waiting times [89]. The same holds for mobile phone calls. Karsai et al. studied the implications of the bursty patterns on the links of a mobile call graph [90].
They observed that the inter-event time indeed ranges over multiple orders of magnitude, and in particular, that the burstiness of human communication induces long waiting times, which slow down the spreading of information over the network (see Section 6 for more results on spreading processes). In a further paper [91], Karsai et al. also analyzed the distribution of the numbers of events in bursty cascades, thus better explaining the correlations and heterogeneities in temporal sequences that arise from the effects of memory in the timing of events. In another study, Wu et al. find that the distribution of times between two consecutive events is neither a power law nor an exponential, but rather a bimodal distribution represented by a power law with an exponential tail [92]. It is interesting to note that in the previous papers, the authors observed the inter-event time on links, by sorting links by weight. In [93], Candia et al. perform a similar task but for nodes, and measure the inter-event time for nodes, grouping them based on the number of calls they made. Similarly to Karsai et al.'s observations, the inter-event times range over several orders of magnitude, and the distribution is shifted to higher inter-event times for nodes of lower activity. By rescaling with the average of each distribution, the inter-event time distributions collapse into a single curve fitted by a power law with exponent 0.9 followed by an exponential cutoff at 48 days: $$ p(\Delta T) \propto (\Delta T)^{-\alpha}\exp(-\Delta T/\tau_{c}). $$ In a further paper, Karsai et al. study bursty trains, and show that the burstiness observed in communication networks is mainly a link property, rather than a node property [94]. They show that bursty trains usually involve the same pair of individuals, rather than one node and several of their neighbors.
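A common way to quantify this effect — not used verbatim in the papers above, but standard in the burstiness literature — is Goh and Barabási's burstiness coefficient \(B = (\sigma - \mu)/(\sigma + \mu)\) of the inter-event times, where μ and σ are their mean and standard deviation:

```python
import statistics

def burstiness(times):
    """Burstiness coefficient B = (sigma - mu) / (sigma + mu) of the
    inter-event times (Goh and Barabási's measure): B -> 0 for a
    Poisson process, B -> 1 for extremely bursty sequences, and
    B = -1 for a perfectly regular one."""
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    mu = statistics.mean(gaps)
    sigma = statistics.pstdev(gaps)
    return (sigma - mu) / (sigma + mu)

regular = [0, 10, 20, 30, 40, 50]          # evenly spaced calls
bursty = [0, 1, 2, 3, 100, 101, 102, 300]  # trains separated by long gaps
print(burstiness(regular))  # → -1.0: perfectly regular
print(burstiness(bursty))   # positive: bursty
```

Event sequences drawn from mobile phone links typically yield clearly positive values, consistent with the heavy-tailed inter-event time distributions described above.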
Karsai et al. further observe that within those bursty trains, there is a strong imbalance within a link with respect to who initiates the communication when voice calls are observed, while trains of SMSs are much more balanced. The origin of this burstiness in human behavior has been discussed in several papers in recent years. It is expected, for example, that people have more activity during the daytime than at night, and that some times of the day represent peaks of activity [95]. Could the burstiness of phone calls therefore be due solely to the daily patterns present in our lives? Jo et al. studied this question and looked at how much of the burstiness of events remained once the circadian and weekly patterns that appear in a mobile phone dataset were removed [96]. They dilated (contracted) the time of their dataset at times of high (low) activity. They observed that much of the burstiness remained after removing the circadian and weekly patterns, indicating that there is probably another cause of burstiness coming from the mechanisms of correlated patterns of human behavior. Another hypothesis was suggested by Barabási [89], who argues that burstiness comes from task prioritizing in human behavior: if an individual always executes the highest-priority task on their list first, then high-priority tasks are executed soon after their arrival on the list, while lower-priority tasks stay on the list for a much longer time, waiting until all higher-priority tasks are executed. This process leads to a fat-tailed distribution of waiting times, as was shown in [89]. Mobile phone networks are composed of complex patterns and interactions, but little work has been done so far to characterize these interactions.
The temporal arrival and disappearance of structures more complex than simple edges, and the timescales of human communication, are only two examples of the wide range of research that remains to be explored in this matter. Combining space and time - mobility Given their portability, mobile phones are reliable devices for recording the mobility traces of users. The availability of spatio-temporal information on mobile phone users has already led to a tremendous number of research projects and potential applications (see Section 7), too numerous to review exhaustively here. The increasing number of smartphone applications that offer services based on the geolocation of the user is proof that this information still has many potential uses yet to be discovered. In this section, we concentrate on the contributions that present new observations or methods for analyzing and modeling human mobility, while the contributions that propose new applications or uses of these methods are presented in Section 7. Individual mobility is far from random A mobility trace is represented as a sequence of cell phone towers at which a specific user has been recorded while making a phone call. By studying the traces of 100,000 mobile phone users over 6 months, González et al. found that human trajectories show a high degree of temporal and spatial regularity [97], as illustrated on Figure 14. This result contrasts with the usual approximations of human motion by random walks or Lévy flights. Their main result is that all users show very similar patterns of motion, up to a parameter defining their radius of gyration. The regularity is mainly due to the fact that users spend most of their time in a small number of locations. If rescaled and oriented along its principal axis, the mobility of all users can then be described by a single function. These findings are supported by further work by Song et al.
[98], who identify significant differences between observational data and two typical models of human displacement: the continuous-time random walk and the Lévy flight. Instead, the authors show that a model mixing the propensity of users to return to previously visited locations with a drift for exploration manages to reproduce characteristics present in their data but absent from traditional models. In their model, each time a user decides to change location, they can either choose a new location with a probability that decreases with the number of already visited locations (\(p_{\mathrm{new}} \propto S^{-\gamma}\), where S is the number of visited locations, and γ a constant), or return to a previously visited location. Despite its simplicity, this model manages to explain the temporal growth of the number of distinct locations, the shape of the probability distribution of presence in each location, and the slowness of diffusion. Probability of finding a mobile phone user in a specific location. Probability density function \(\Phi(x,y)\) of finding a mobile phone user in location \((x,y)\). The plots, from left to right, were generated for users having a different radius of gyration. After rescaling based on the variance of each distribution, the resulting distributions show approximately the same shape. Figure reprinted by permission from Macmillan Publishers Ltd: Nature [97], copyright (2008) In another approach, Csáji et al. show how small the number of frequently visited locations is [99]. They define a frequently visited location of a user as a place where more than 5% of their phone calls were initiated. Using a sample of 100,000 users randomly chosen in a dataset of communications of Portugal, the authors find that the average number of frequently visited locations is only 2.14, and that 95% of the users frequently visit fewer than 4 locations. Instead of making a list of frequently visited locations, Bagrow et al.
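The exploration and preferential-return mechanism described above can be sketched in a few lines. This is a toy illustration, not the calibrated model of [98]: the proportionality constant ρ for \(p_{\mathrm{new}}\) is an assumption, and the value γ ≈ 0.21 is used here only as a plausible constant.

```python
import random

def exploration_return(n_steps=2000, rho=0.6, gamma=0.21, seed=1):
    """Sketch of an exploration / preferential-return mobility model.
    At each step the user explores a new location with probability
    rho * S**(-gamma), otherwise returns to a previously visited location
    with probability proportional to its visit count.  Returns the number
    of distinct visited locations after each step."""
    rng = random.Random(seed)
    visits = {0: 1}          # location id -> number of visits
    next_id = 1
    s_of_t = []
    for _ in range(n_steps):
        S = len(visits)
        if rng.random() < rho * S ** (-gamma):   # explore a new place
            visits[next_id] = 1
            next_id += 1
        else:                                    # preferential return
            total = sum(visits.values())
            r = rng.uniform(0, total)
            acc = 0.0
            for loc, count in visits.items():
                acc += count
                if r <= acc:
                    visits[loc] += 1
                    break
        s_of_t.append(len(visits))
    return s_of_t
```

The number of distinct locations grows sublinearly in time, which is the qualitative behavior the model is meant to capture.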
propose another method that groups frequently visited locations representing recurrent mobility into one 'habitat' [100]. The primary 'habitats' therefore capture the typical daily mobility, while subsidiary 'habitats' represent occasional travel. Interestingly, they show that the mobility within each habitat presents universal scaling patterns, and that the radius of gyration of motion within a habitat is usually an order of magnitude smaller than that of the total mobility. However synchronized and predictable the mobility presented here may seem, most of these studies are based on data from developed countries, where cultural and linguistic diversity does not play as big a role as in the developing world. Amini et al. analyze and quantify the differences between mobility patterns in Portugal and Ivory Coast, and show that models that perform well for developed countries can be challenged by the cultural and linguistic diversity of Ivory Coast, which counts 60 distinct tribes [101]. They show, for example, that commuters in Ivory Coast tend to travel much longer distances than their counterparts in Portugal, and that mobility patterns vary much more across the country in Ivory Coast than in Portugal. If mobility traces are not random, and if users often return to their previously visited locations, can human mobility be predicted? Song et al. [102] addressed this question and investigated to what extent one can predict the next location of a user based on the sequence of their previously visited locations. This predictability is given by the entropy rate of the sequence of locations at which the user is observed. Importantly, one has to point out that not only the frequency of visits to each location is taken into account, but also the temporal correlations between those visits.
Their results show that the temporal correlations of the users' displacements drastically reduce the uncertainty about the presence of a mobile phone user, see Figure 15. Using Fano's inequality, they deduce that an appropriate algorithm could correctly predict a user's location up to 93% of the time on average. The most surprising finding is that not only are users highly predictable on average, but this predictability remains constant across the whole population, whatever distances users typically travel. While one would expect people traveling often and far to be less predictable than those who stay in their neighborhood, Song's results seem to indicate that there is no variation in predictability across the population. Entropy and predictability of the location of users. (left) Entropy rate of the location of users, for the real, uncorrelated and random data. (right) Maximal predictability of the location of users, for the real, uncorrelated and random data. From Song et al., Limits of predictability in human mobility, Science 327(5968):1018 (2010) [102]. Reprinted with permission from AAAS. While the aim of the previous work was to show how predictable human motion could be, the authors did not provide any prediction algorithm, keeping their contribution on the theoretical side. Calabrese et al. went a step further and proposed in [103] a predictive model for the location of people. Their algorithm is based both on the past trajectory of the targeted user and on a general drift of the collectivity, imposed by geographical features and points of interest. The prediction is then a weighted average of an individual behavior and a collective behavior. The individual behavior is modeled as a first-order approximation of the concept proposed by Song [102], building a Markov chain where states are locations visited by the user and the probability of moving from state i to state j is proportional to the number of times this transition has been observed in the data.
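The predictability bound derived from Fano's inequality can be reproduced numerically. The sketch below assumes the form of the inequality used in [102], \(S = H(\Pi) + (1-\Pi)\log_{2}(N-1)\), where S is the entropy rate and N the number of distinct visited locations, and solves it for the maximal predictability Π by bisection.

```python
import math

def fano_predictability(entropy_rate, n_locations):
    """Maximal predictability solving Fano's inequality
    S = H(p) + (1 - p) * log2(N - 1) for p, by bisection.
    The left-hand side is decreasing in p on [1/N, 1]."""
    def fano(p):
        h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
        return h + (1 - p) * math.log2(n_locations - 1)
    lo, hi = 1.0 / n_locations, 1.0 - 1e-12
    for _ in range(100):
        mid = (lo + hi) / 2
        if fano(mid) > entropy_rate:
            lo = mid   # entropy budget not exhausted: p can be larger
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, an entropy rate of 0.8 bits over 64 locations yields a maximal predictability above 90%, in line with the order of magnitude reported in [102].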
The collective behavior is then modeled as a weighted average of the influence of distance, points of interest and land use. Applied to a sample of a dataset containing the records of 1 million people over 4 months, their model correctly predicts the next location of a user in 60% of cases. The Markov chain approach used by Calabrese et al. for modeling the individual behavior also underlies a study by Park et al. [104]. They showed how the temporal evolution of the radius of gyration of a user can be explained by the eigenmode analysis of the transition matrix of the Markov chain. More precisely, the eigenvectors of the transition matrix provide fine-grained information on the traces of individuals. Instead of looking at the general mobility of people, Simini et al. focused on modeling the commuting fluxes between cities, and introduced the radiation model [105], overcoming some of the limitations of the gravity model (recall Section 3). The radiation model is a stochastic model, assigning a person from a county i to a job in another county j with a probability depending on the estimated number of job opportunities close to the county of origin i. The estimated number of job opportunities in a given county is also a stochastic variable proportional to the total population of the county. If we name \(d_{ij}\) the distance between counties i and j, the average number of commuters between the two counties depends on the populations of both counties (\(m_{i}\) and \(n_{j}\), respectively), and on \(s_{ij}\), representing the total population in a circle of radius \(d_{ij}\) centered on i: $$ \langle T_{ij} \rangle= T_{i} \frac{m_{i}n_{j}}{(m_{i} + s_{ij})(m_{i}+n_{j}+s_{ij})}, $$ where \(T_{i}\) is the total number of commuters from county i.
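The radiation flux can be computed directly from the formula above. In the sketch below, the helper that estimates \(s_{ij}\) from a toy list of (x, y, population) triples is our own illustrative construction; the county coordinates and populations are hypothetical.

```python
import math

def radiation_flux(T_i, m_i, n_j, s_ij):
    """Average commuter flux <T_ij> of the radiation model, where T_i is
    the total number of commuters leaving county i, m_i and n_j the
    populations of the origin and destination counties, and s_ij the
    population within a circle of radius d_ij centered on i."""
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

def s_ij(locations, i, j):
    """Population within the circle of radius d_ij centered on location i,
    excluding the populations of i and j themselves.  `locations` is a
    list of (x, y, population) triples (hypothetical toy data)."""
    xi, yi, _ = locations[i]
    xj, yj, _ = locations[j]
    d = math.hypot(xj - xi, yj - yi)
    return sum(pop for k, (x, y, pop) in enumerate(locations)
               if k not in (i, j) and math.hypot(x - xi, y - yi) <= d)
```

Note that the flux decreases as the intervening population \(s_{ij}\) grows, which is the key qualitative difference with the gravity model.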
The radiation model, however efficient, still relies on knowledge of the distribution of the population, which may be difficult to obtain in some areas such as the developing world. Overcoming this limitation, Palchykov et al. suggest a new model using only communication patterns [106]. The communication model supposes that the mobility between two places i and j is a function of the distance \(d_{ij}\) separating the two locations, and of the intensity of communication between them, \(c_{ij}\): $$ T_{ij} = k \frac{c_{ij}}{d_{ij}^{\beta}}, $$ where k is a normalization constant. The authors find fitted values of the parameter β around 0.98 or 1.08, depending on whether they consider mobility at the intra- or inter-city level, respectively. As it appears, the massive amount of mobility data, which at first sight might be considered random motion, obeys a strict routine. Mathematical models, prediction algorithms and visualization tools (see for example Martino's work [107]) have recently shed light on this routine, making it possible to construct better human displacement models which can be used to predict epidemic outbreaks. At the individual level, this routine appears to strictly rule our daily behavior, as Eagle and Pentland [108] show that six eigenvectors of the mobility patterns of users are sufficient to reconstruct 90% of the observed variance. They also observed that individuals tend to have synchronized behaviors, which will be described in the next paragraph. Aggregate mobility reveals synchronized behavior of populations At a higher level, those datasets allow whole populations to be considered from a bird's-eye point of view. More practically, the availability of such massive data allows us first to observe and quantify the interaction of people with their environment, and second to quantify the synchronicity of those interactions.
Initial projects, such as the Mobile Landscapes project [109] and Real Time Rome [110], have shed light on the potential of such an approach, their contributions being essentially visual. The next step was made by Reades et al. [111], who used tower signals as a digital signature of the neighborhood. They showed how similar locations present similar signatures, which implies that a clustering of the urban space is possible, based on the phone usage recorded by its antennas. In particular, the obtained clusters reveal known segmentations of the town, such as residential areas, commercial areas, bars or parks. In short, such a technique may be used as a cheap census method on area usage, which could be of great interest to local authorities. Going a bit further, the same team showed that by using an eigendecomposition [112] of the signatures of different locations in town, it is possible to extract significant information on differences and similarities in space usage, see Figure 16 for the four principal eigenvectors of the signature of a weekday. With the same goal in mind, Csáji et al. [99] used a k-means clustering algorithm on the activity patterns of different areas to detect which places show the same weekly calling patterns, and thus identify which places typically correspond to work or home calling patterns (see Figure 17). Eigenvectors of the Erlang signature of a weekday. Four principal eigenvectors of the Erlang signature for a weekday of 7 places in Rome. While most of the variance is dominated by the principal eigenvector, representing the normal daily activity, the differences between the other eigenvectors indicate differences in space usage. Figure reproduced from [112]. Weekly pattern of clusters. We observe clear differences between the calling behavior of work and home locations. Figure reproduced from [99].
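As an illustration of this kind of clustering, the sketch below applies a naive k-means to synthetic hourly activity signatures. The 'work' and 'home' profiles are invented for the example, and the deterministic initialization is a simplification, not the procedure of [99].

```python
def kmeans(signatures, k=2, n_iter=20):
    """Naive k-means over equal-length activity signatures, in the spirit
    of the clustering of weekly calling patterns of antennas."""
    # deterministic initialization: spread initial centers over the input
    idx = ([round(i * (len(signatures) - 1) / (k - 1)) for i in range(k)]
           if k > 1 else [0])
    centers = [list(signatures[i]) for i in idx]
    labels = [0] * len(signatures)
    for _ in range(n_iter):
        # assign each signature to its nearest center (squared distance)
        for i, s in enumerate(signatures):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(s, centers[c])))
        # recompute the centers as cluster means
        for c in range(k):
            members = [s for s, l in zip(signatures, labels) if l == c]
            if members:  # keep the old center if the cluster is empty
                centers[c] = [sum(col) / len(col) for col in zip(*members)]
    return labels
```

On synthetic data where half of the signatures peak during office hours and half in the evening, the two groups are recovered exactly.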
Addressing a closely related question, Karikoski and Soikkeli studied data collected from smartphones in the context of the OtaSizzle project at Aalto University, where users agreed to share their data [113]. The authors study whether different contexts trigger different usage patterns of smartphones. From the mobility traces of users, they classify the places where a user is observed into home, work, other meaningful, and elsewhere, the last representing only pass-by places. They are able to show that depending on the context, users have different usage patterns. For example, they show that voice calls are longer and more intensively used when people are at home, and that SMSs are more popular in the office context, where voice calls are the shortest. In a paper studying the same dataset, Jo et al. study the contextual and temporal correlations between service usage and thus characterize typical usage patterns of smartphone services [114]. The authors further use k-means clustering to extract typical weekly behavior, and thus classify users into morning-type and evening-type usage patterns. Addressing a very closely related question, Trestian et al. show that the mobility and locations of people also influence the choice of smartphone applications they use [115]. Using a similar approach, Naboulsi et al. classify call profiles of snapshots of the network, corresponding to an aggregation of the traffic going through the network during a given time window [116]. They measure the similarity between two snapshots, comparing volumes and distribution of the traffic through the network. They further extract typical usage patterns, and propose a method to detect outlying behavior in the network. It is interesting to note that even though the methods are very similar, this last approach is based only on antenna-to-antenna traffic, and not on individual behavior and mobility patterns, as were the previous studies.
Beyond the analysis of a single city, Isaacman et al. explored behavioral differences between the inhabitants of different cities [117]. By analyzing the mobility of hundreds of thousands of inhabitants of Los Angeles and New York City, they showed that Angelenos travel on average twice as far as New Yorkers. Finding an explanation for such a significant difference seems possible if the inhomogeneities of population density and city surface are taken into account. See, for example, the work of Noulas et al. [53], who show using Foursquare location data that when a rank-based distance is used, the differences between cities are leveled. A rank-based distance measures the distance between two places i and j as the number of potential opportunities (people, places of interest) that are closer to i than j is. Given the geographic distance \(r_{ij}\) and the density of opportunities expressed in radial coordinates centered on i, \(p_{i}(r,\theta)\), such a distance reads $$ \operatorname{rank}(i,j) = \int_{0}^{2\pi} \int _{0}^{r_{ij}} p_{i}(r,\theta) r \,dr\, d\theta. $$ In a city of large population density, there will be more opportunities at short geographical distance than in a city with low population density. Hence, users are likely to travel over shorter distances in cities of large population density. These distortions induced by the use of geographical distance are leveled by the rank-based distance. In a recent study, Louail et al. suggest another way to formalize these differences and analyze the spatial structure of cities by detecting hot spots or points of interest in 31 Spanish metropolitan areas [118]. The authors show that the average distance between individuals evolves during the day, highlighting the spatial structure of the hot spots and the differences and similarities between different types of cities.
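The rank-based distance has a direct discrete counterpart when opportunities are given as a list of points; the sketch below counts the opportunities strictly closer to i than j is. The coordinates are hypothetical toy data.

```python
import math

def rank_distance(points, i, j):
    """Discrete rank-based distance in the spirit of Noulas et al. [53]:
    the number of opportunities strictly closer to i than j is.
    `points` is a list of (x, y) tuples."""
    xi, yi = points[i]
    d_ij = math.hypot(points[j][0] - xi, points[j][1] - yi)
    return sum(1 for k, (x, y) in enumerate(points)
               if k not in (i, j) and math.hypot(x - xi, y - yi) < d_ij)
```

In a dense city many opportunities fall inside the circle of radius \(r_{ij}\), so the same geographic distance corresponds to a much larger rank, which is exactly the leveling effect discussed above.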
They distinguish between monocentric cities, where the spatial distribution is dependent on land use, and polycentric cities, where spatial mixing between land uses is more important. In a similar approach, Trasarti et al. also analyze the correlations that arise in terms of co-variations of the local density of people, and uncover highly correlated temporal variations of population, at the city level but also at the country level [119]. If the detection of hot spots and places of interest in a city is possible, can one go one step further and infer the type of activity that people engage in by looking at their mobility patterns? Jiang et al. present a first approach to achieve this in [120], by first extracting and characterizing areas where people stay or only pass by, and then inferring the type of activity they engage in depending on the timing of their visits to certain specific locations. In many cases, modeling the mobility of users starts by creating an Origin-Destination matrix that represents how many people travel between a specific pair of (origin, destination) locations within a given time frame [121–123]. After extracting which places and times of the day correspond to which activities, Alexander et al. propose a method to estimate OD-matrices depending on the time of the day and on the purpose of the trip. The authors' results, extracted from data in the area of Boston, are surprisingly consistent with several travel survey sources. Extreme situation monitoring If the availability of data containing the time-stamped activity of a large population makes it possible to monitor routine population activities, it also enables the observation of the population's collective response to emergencies. Many recent papers have addressed this interesting question. Candia et al. first focused on the temporal activity of users at antennas [93].
They propose a method based on the study of the statistical fluctuations of individual users' behaviors with respect to their average behavior. As shown on Figure 18, in an anomalous case, users show many high fluctuations from their average, while the overall average is close to that of a normal activity. The variance $$ \sigma(a,t,T) = \sqrt{\frac{1}{N-1}\sum _{i=1}^{N} \bigl( n_{i}(a,t,T)- \bigl\langle n(a,t,T) \bigr\rangle \bigr)^{2} } $$ is computed for each place a and time interval \([t,t+T]\), between the different individual behaviors \(n_{i}(a,t,T)\) and the average expected behavior. Comparing this variance with the normally expected variance allows the identification of locations where users are acting abnormally; in case of emergencies, such locations turn out to be spatially clustered. In cases of extreme emergencies, the response of populations can even be monitored as geographically and temporally located spikes of activity. Activity and fluctuations during anomalous events. Activity (top) and fluctuations (bottom) for a normal day (left) and an anomalous event (right). Note that even if no difference is observed in activity, fluctuations are significantly different. Figure reproduced from [93]. ©IOP Publishing. Reproduced by permission of IOP Publishing. All rights reserved. In a related paper, Bagrow et al. [124] analyzed the reaction of populations to different emergency situations, such as a bombing, a plane crash or an earthquake (Figure 19). They observed such spikes of information when eyewitnesses and their neighbors reacted almost directly after the event. The reaction was mostly driven by calls made by nodes who do not usually call at that time, rather than by an increase in the call rate of usually active nodes. A detailed study of the paths followed by the information during its propagation shows the efficiency of the collective response, with contacts 3 to 4 degrees away from eyewitnesses being reached within minutes of the event. Gao et al.
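The fluctuation measure above can be computed per antenna and compared against a baseline. The sketch below does exactly that on toy call counts; the threshold factor and the way the baseline is obtained are illustrative assumptions, not the calibration of [93].

```python
import math

def fluctuation_sigma(counts):
    """Sample standard deviation of the individual call counts n_i(a,t,T)
    around their mean, as in the variance formula above; large values
    flag locations where users deviate strongly from average behavior."""
    n = len(counts)
    mean = sum(counts) / n
    return math.sqrt(sum((c - mean) ** 2 for c in counts) / (n - 1))

def anomalous_locations(obs, baseline, threshold=3.0):
    """Antennas whose fluctuation exceeds `threshold` times their normally
    expected fluctuation (threshold is an illustrative choice)."""
    return [a for a in obs if fluctuation_sigma(obs[a]) > threshold * baseline[a]]
```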
further analyzed these dynamics in [125], and observed that the reciprocity of calls, i.e., 'call-back' actions, shows a sharp increase in emergency cases, such as a bombing or a plane crash. The same kind of spikes of behavior, though with different characteristics, are also known to appear at large-scale events, such as concerts or demonstrations [125, 126]. Altshuler et al. have recently introduced another method, which they call the social amplifier, to detect anomalous behavior and thus detect emergencies [127]. Hubs of the network are nodes that have a very high degree, and are thus very well connected to the rest of the network, enabling them to amplify the diffusion of information through the social graph. Using those particular nodes as social amplifiers, the authors show that analyzing only the local behavior of nodes close to the hubs of the network can be sufficient to detect anomalies of the whole network, and thus detect emergencies. This approach has the advantage that keeping an eye on a limited fraction of the network is computationally much easier than monitoring the whole network activity. Spikes of activity during emergency situations. The activity has been recorded for users close to the center of activity of several emergency situations, relative to the normal activity. Figure reproduced from [124]. Going further than detecting emergencies, Lu et al. studied whether the mobility of populations after a disaster could be predicted, analyzing as a case study the mobility of populations before and after the 2010 Haiti earthquake [128]. Interestingly, the predictability of people's trajectories remained high and even increased in the three months following the earthquake.
The authors also show that the destinations of people who left the capital were highly correlated with their previous mobility patterns, and thus that, with further research, mobile phone data could be used in the future to monitor extreme situations and predict the movements of populations after natural disasters. These results are very encouraging for the many humanitarian organizations that are now trying to use Big Data to save lives. After the earthquake and the following tsunami that struck Japan in 2011, several research teams started a joint project combining several big data sources, such as GPS devices, mobile phones, Twitter or Facebook, to investigate how the analysis of this data could help save lives in the future, if natural disasters were to strike these regions again. Similar research has been conducted by Kryvasheyeu et al., analyzing Twitter data during and after Hurricane Sandy in 2012 and measuring the performance of friendship links in raising awareness [129]. This area of research still needs to be explored; especially as so many data sources are now becoming available, combining datasets could prove very useful, and even life-saving for some people. Mobility and social ties The joint availability of mobility traces and social interactions in the same dataset makes it possible to address causality questions on the creation of social links. From the work of Calabrese et al. it appears that users who call each other have almost always physically met at least once over a one-year interval [130]. Users call each other mostly right before or after physical co-location, and interestingly, the frequency of meetings between users is highly correlated with their frequency of calls as well as with the distance separating them. Going a step further, one may wonder if social ties could be predicted using mobility data. Wang et al.
[131] showed that indeed, nodes that are not connected in the network, but topologically close, and which show similar mobility patterns, are likely to create a link. By combining the mobility similarity and the topological distances in a decision-tree classifier, they manage to improve significantly on classical link prediction algorithms, yielding an average precision of 75% and a recall of 66%. Closely related, Eagle et al. showed on 4 years of data how the social network of people changes drastically when moving from one geographical environment to another [132]. On a related topic, Toole et al. measure the similarity between the mobility of users to classify social relationships and show how to contextualize social contacts using their mobility patterns [133]. The authors further present a mobility model, based on stochastic decisions to return to a previously visited place or to explore, and to base the choice on social influence or on individual preference. They show that this model achieves good accuracy in reproducing the similarity of mobility traces between social contacts. Dynamics on mobile phone networks Many networks represent a transport between nodes via their links. In mobile phone networks, the links transport either information exchanged during phone calls or data contained in non-voice exchanges (SMS, MMS). Information diffusion has opened questions on the speed of the diffusion or on the presence of super-spreaders, with applications in viral marketing or crowd management. The transmission of data has been at the centre of attention only recently, with the rise of new types of computer viruses running on smartphones. Information diffusion A phone call is associated with the transfer of information between caller and callee. However, as paradoxical as it may sound, mobile phone datasets are not appropriate for observing real propagations of information. The content of phone calls or text messages is, for evident privacy reasons, unknown.
Yet, without access to the content, it is impossible to decide with certainty whether an observed pattern of calls reflects the transmission of information or happens by chance. One can imagine a network with a number of indistinguishable balls circulating between the nodes. Each time a node receives a ball from one of its neighbors, it keeps it for a random time interval and then transmits it to one of its neighbors. Suppose now that one decides to track the movement of one specific ball. If the number of balls is small compared to the number of nodes, this is still doable, as long as each node has at most one ball in its possession. However, if the number of balls increases to become comparable to the number of nodes, there is a high probability of confusing the paths of several balls. Add to this that balls might be added, removed or duplicated during the process, and one gets a situation similar to trying to track a piece of information in a mobile phone network. This artificial example reflects well the issue of tracking information. Peruani and Tabourier addressed this issue and showed that cascades of information, such as those observed in mobile call graphs, are statistically irrelevant, and thus probably do not correspond to real propagations [134]. Tabourier et al. show in a further paper [135] that even though large cascades of information spreading do not seem to happen in mobile call graphs, local short chain-like patterns and closed loops seem to be the effect of some causality and could very well be related to information spreading. In a small number of cases, however, the actual observation of large diffusions of information might be possible. Studying the case of emergencies, such as a plane crash or a bombing, Bagrow et al. [124] observed an unusual activity in the geographical neighborhood of the catastrophe.
In this case, the knowledge of both the temporal and spatial localization of an unexpected event that is likely to generate a cascade of information allows one to assume that the observed sequences of calls are correlated for a specific reason. While in most cases the observation of real propagations seems an unreachable objective, more complete research has been carried out on the simulation of the propagation of information on complex networks, whose results have been extended to questions related to mobile phone networks. There are several ways of modeling information diffusion on networks. A simple way is used in [14] with an SI or SIR model where at each time step, infectious nodes try to infect their neighbors with a probability proportional to the link weight, which corresponds to a sequence of percolation processes on the network. However, mobile phone networks are known to have very particular dynamics (recall Section 4), which are not taken into account here. Miritello et al. [136] used a formalism similar to the one presented by Newman [137] for epidemics, to characterize the dynamical strength of a link, which can be used as link weight to map the dynamical process onto a static percolation problem. The dynamical strength, given an SIR model with recovery time T and probability of transmission λ, is given by $$ \mathcal{T}_{ij}[\lambda,T] = \sum_{n=0}^{\infty}P(w_{ij} = n;T)\bigl[1-(1-\lambda)^{n}\bigr], $$ which is the expected probability of having n calls between i and j in a time range of T, multiplied by the probability of propagation given these n calls, summed over all possible values of n. Using an approximation of this expression, they manage to link the observed outbreaks to classical percolation theory tools. However, such a formalism still neglects the impact of temporal correlations between calls, which significantly slow down the transmission of information over a network.
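The dynamical strength of a link can be evaluated directly from the formula above once the call-count distribution \(P(w_{ij} = n; T)\) is known. The sketch below truncates the infinite sum and, as an illustration only, uses a Poisson distribution of call counts; the empirical distributions used in [136] are of course different.

```python
import math

def poisson_pmf(mu, n_max=50):
    """Truncated Poisson pmf, used here as an illustrative stand-in for
    the call-count distribution P(w_ij = n; T)."""
    return [math.exp(-mu) * mu ** n / math.factorial(n)
            for n in range(n_max + 1)]

def dynamical_strength(p_calls, lam):
    """Dynamical strength T_ij[lam, T] of a link: expected transmission
    probability given the distribution p_calls of the number of calls,
    where each call transmits independently with probability lam."""
    return sum(p * (1.0 - (1.0 - lam) ** n) for n, p in enumerate(p_calls))
```

For a Poisson distribution with mean μ, the sum has the closed form \(1 - e^{-\mu\lambda}\), which the truncated computation reproduces and which provides a quick sanity check.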
Social networks often exhibit small-world topologies, characterized by average shortest paths between pairs of nodes that are very short compared to the size of the network [138]. However, Karsai et al. [90] used different randomization schemes to show that even though social networks have a typical small-world topology, the temporal sequence of events significantly slows down the spreading of information, as illustrated on Figure 20. Kivelä et al. [139] analyze this topic further, and introduce a measure they call the relay time, specific to each link, that represents the time it takes for a newly infected node to spread the information through that link. By analyzing several computations of this relay time, in randomized and empirical networks, they show that the bursty behavior of links, but also the broad distribution of link weights, are the components that slow down the spreading dynamics in mobile phone networks the most. In another study, Karsai et al. [78] confirm this influence and show that neglecting the time-varying dynamics by aggregating temporal networks into their static counterparts introduces serious biases, of several orders of magnitude, in the timescale and size of a spreading process unfolding on the network. Comparison of the speed of spreading processes using different randomization schemes. (left) Fraction of infected nodes as a function of time for the real (red) data and different randomization schemes. (right) Average prevalence time distribution for nodes. Reprinted figure with permission from Karsai et al., Phys Rev E, 83(2):025102, 2011 [90]. Copyright (2011) by the American Physical Society. http://dx.doi.org/10.1103/PhysRevE.83.025102 From a more theoretical point of view, diffusion processes can be seen as particular cases of dynamical systems. Liu et al. [140] questioned in this framework the controllability of complex networks.
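The role of event ordering in temporal spreading can be made concrete with a deterministic SI process on a list of time-stamped contacts. The toy example below illustrates why aggregating a temporal network into its static counterpart is misleading: the same contacts in a different order produce a different outbreak. It is a didactic sketch, not the randomization protocol of [90].

```python
def si_spread(events, seed_node):
    """Deterministic SI process on a temporal contact sequence.
    `events` is a list of (time, u, v) contacts; at each contact an
    already-infected endpoint infects the other one.  Returns a dict
    mapping each infected node to its infection time."""
    infected = {seed_node: float('-inf')}   # seed infected from the start
    for t, u, v in sorted(events):          # process contacts in time order
        if u in infected and infected[u] <= t and v not in infected:
            infected[v] = t
        elif v in infected and infected[v] <= t and u not in infected:
            infected[u] = t
    return infected
```

In the usage below, both event lists aggregate to the same static graph a-b-c, yet only the time-respecting ordering lets the infection reach c: a contact that occurs before its endpoint is infected is wasted.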
The problem was stated as follows: given a linear dynamical system with time-invariant dynamics $$ \frac{d\mathbf{x}(t)}{dt} = A\mathbf{x}(t) + B\mathbf{u}(t), $$ where \(\mathbf{x}(t) = (x_{1}(t),\dots,x_{N}(t))^{T}\) defines the state of the nodes of the network at time t, A is the (possibly weighted) adjacency matrix of the network, and B an input matrix, what is the minimal number of input nodes needed such that the state of each node is controllable, i.e., the system is entirely controllable? From control theory, one knows that a necessary and sufficient condition is that the reachability matrix \(C=(B,AB,A^{2}B,\dots ,A^{N-1}B)\) is of full rank. From previous work, it is known that the minimal number of nodes required is related to the maximum matching in the network, which can be computed with a reasonable complexity. For example, the authors show that in a mobile phone network, one needs to control about 20% of the nodes in order to achieve full controllability of the system. Surprisingly, most nodes needed for controlling the network are low-degree nodes, while hubs, which are commonly used as efficient spreaders, are under-represented in the set of input nodes. While the practical interest of this research still needs to be defined, this first result on the controllability of networks might open new ideas in the field of information spreading. Finally, one may wonder if the patterns of phone usage are efficient in a collaborative scheme. Cebrian et al. [141] studied this with a small model, where each node of a mobile phone graph is represented as an agent endowed with a state represented by a binary string. The agents are all given the same function f, which takes their binary string as input, computes their personal score, and is hard to optimize. After each communication, the two communicating agents can modify their state in order to increase their personal score.
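The rank condition on the reachability matrix can be checked directly for small networks. The sketch below builds \(C=(B,AB,\dots,A^{N-1}B)\) and computes its rank by Gaussian elimination; the example matrices in the usage are toy constructions, not data from [140].

```python
def controllability_rank(A, B):
    """Rank of the reachability matrix C = (B, AB, ..., A^{N-1}B).
    By the rank condition, the system is controllable iff rank == N."""
    N = len(A)

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    blocks, P = [], [row[:] for row in B]
    for _ in range(N):
        blocks.append(P)
        P = matmul(A, P)
    # concatenate the blocks horizontally into the reachability matrix
    rows = [sum((blk[i] for blk in blocks), []) for i in range(N)]
    # Gaussian elimination: count the pivots
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, N) if abs(rows[i][c]) > 1e-9), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(N):
            if i != r and abs(rows[i][c]) > 1e-9:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
        if r == N:
            break
    return r
```

On a directed chain 1 → 2 → 3, driving the first node controls the whole system, while driving the last node does not, which matches the intuition that inputs must be placed upstream of the dynamics.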
This modification is done with a simple genetic algorithm, which simulates a cross-over of the states of both agents. Practically, suppose that two agents i and j are respectively in states \(\mathbf{x}_{i}^{(t)}\) and \(\mathbf{x}_{j}^{(t)}\) at time t. These states are both binary strings of length T. The agents choose a random integer c in the interval \([1,T]\) and both update their state as
$$\begin{aligned}& \mathbf{x}_{i}^{(t+1)} = \arg\max_{x\in\{\mathbf{x}_{i}^{(t)},\mathbf{y}_{1},\mathbf{y}_{2}\}} f(x), \end{aligned}$$
$$\begin{aligned}& \mathbf{x}_{j}^{(t+1)} = \arg\max_{x\in\{\mathbf{x}_{j}^{(t)},\mathbf{y}_{1},\mathbf{y}_{2}\}} f(x), \end{aligned}$$
where \(\mathbf{y}_{1}\) is the vector with the first c entries of \(\mathbf{x}_{i}^{(t)}\) and the last \(T-c\) entries of \(\mathbf{x}_{j}^{(t)}\), and \(\mathbf{y}_{2}\) is the vector with the first c entries of \(\mathbf{x}_{j}^{(t)}\) and the last \(T-c\) entries of \(\mathbf{x}_{i}^{(t)}\). With this model, the authors observe that the average score over all agents obtained on the real topology is smaller than on a random topology, in line with similar known results from population genetics. Also, perturbing the time sequence of calls produces a small enhancement of the global fitness.

Mobile viruses

The study of virus propagation has a long history, be it for biological viruses or, more recently, computer viruses. Wang et al. [142] studied a new kind of virus, which spreads over mobile phone networks. Their work is motivated by the increasing number of smartphones, which have high-level operating systems like computers, leading to a higher risk of an outbreak. So far, despite the large number of known mobile viruses, no real outbreak has been observed. The reason is that mobile viruses function only on the operating system for which they are designed. An infected phone can hence only transfer the virus to contacts running the same operating system.
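Returning to the collaborative-optimization model above: the cross-over update of Cebrian et al. [141] can be sketched in a few lines. The score function used here (count of 1-bits) is a deliberately easy stand-in for the hard-to-optimize f of the paper:

```python
import random

def crossover_update(x_i, x_j, f, rng=random):
    """One communication event: cut both binary strings at a random
    point c in [1, T], form the two recombined strings y1 and y2, and
    let each agent keep the best of {its own state, y1, y2} under f."""
    T = len(x_i)
    c = rng.randint(1, T)
    y1 = x_i[:c] + x_j[c:]   # first c bits of i, last T-c bits of j
    y2 = x_j[:c] + x_i[c:]   # first c bits of j, last T-c bits of i
    return max((x_i, y1, y2), key=f), max((x_j, y1, y2), key=f)

# Illustrative score: number of 1-bits in the string.
score = lambda x: sum(x)
```

By construction, each agent's score can only stay equal or improve after a call, which is what drives the global fitness in the model.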
As shown by Wang et al., this situation corresponds to a site percolation process on the network of possible contacts. Given the actual market shares of the main operating systems, the authors showed that these shares were below the percolation threshold of the contact network. The study covers the two types of spreading available to viruses: diffusion via Bluetooth and via the Multimedia Messaging System (MMS). The two show major differences in spreading patterns: Bluetooth viruses spread relatively slowly and depend on user mobility, whereas MMS epidemics spread extremely fast and can potentially reach the whole network in a short time, see Figure 21. Currently, however, MMS epidemics remain contained in small parts of the network, due to the fragmentation into different operating systems. The authors thus conclude that if no outbreak has taken place so far, it is not due to a lack of efficient viruses, but is rooted in the fragmentation of the call graph. However, the current evolution of the market is leading to a situation where some operating systems are gaining a large market share, which could create a riskier situation.

Figure 21: Propagation of a mobile virus, either via MMS or Bluetooth service, over the observed area. From Wang et al., Understanding the spreading patterns of mobile phone viruses, Science 324(5930):1071 (2009) [142]. Reprinted with permission from AAAS.

In a subsequent study, Wang et al. [143] show how the scanning technique, where MMS malware generates random phone numbers to which it tries to propagate instead of using the address book of its host, increases the probability of a major outbreak, even when operating systems' market shares are too low to form a giant component. Operators can detect such outbreaks by monitoring the MMS traffic of their network and observing suspicious increases in volume. However, given enough time, viruses can infect a large fraction of the network without being detected by operators.
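The site-percolation picture behind these results can be illustrated with a short simulation: keep each phone with probability equal to one operating system's market share, and measure the largest connected cluster of same-OS phones. This is a toy sketch; the graph and the shares are assumptions, not data from [142]:

```python
import random

def giant_component_fraction(adj, market_share, rng=random):
    """Site percolation: each node is kept independently with
    probability `market_share`; return the size of the largest
    connected component among kept nodes as a fraction of all nodes."""
    kept = {v for v in adj if rng.random() < market_share}
    seen, best = set(), 0
    for v in kept:
        if v in seen:
            continue
        seen.add(v)
        stack, size = [v], 0
        while stack:                      # depth-first search
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w in kept and w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / len(adj)
```

Sweeping `market_share` from 0 to 1 on a large contact graph reveals the percolation threshold below which no giant component of same-OS phones exists, and hence no large MMS outbreak is possible.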
Smart anomaly detection schemes may prevent such outbreaks, as may a reduction of the market shares of operating systems. Wang et al. also compare these two spreading strategies in a further paper [144], studying the effectiveness of purely topological viruses versus viruses that also use a scanning technique. The authors show that topological viruses, i.e., those that spread through the contact network of infected phones, are the most effective for an operating system with a large market share, whereas the scanning technique will generate a bigger outbreak in the case of an operating system with a low market share.

Applications in urban sensing, epidemics, development

The last few years have seen the rise of Big Data and of its uses, and in many regards this is rapidly changing our lives and ways of thinking. Beyond observing networks of mobile phone calls or modeling social behavior, many researchers now engage in finding new ways of using mobile phone data in everyday life.

Urban sensing

As shown in the previous sections, mobile phone data makes it possible to observe and quantify human behavior as never before. Besides purely sociological questions, this data also opens a number of potential applications, which gives it an intrinsic economic value, for instance in geo-localized advertising [145]. Recalling that an increasing fraction of the available smartphone applications record the user's geolocation, whether or not it is necessary for the app to work, it is easy to understand that this information is valuable for targeting the right users in advertising campaigns, or simply for understanding the profile of an application's users. Mobile phones are increasingly becoming a way of taking the pulse of a population, or the pulse of a city, and we expect that in the future more and more cities will make development plans based on information gathered from mobile phone data.
In this framework, recent research has shown that mobile phone data can detect where people are [47] and where people travel to [99], including the purpose of their trips [120]. If these findings are applied to a whole city and points of interest are uncovered via mobile phone data (recall Section 5), then the whole organization of urban places can be informed by the knowledge gained from this data. Urban sensing is only briefly addressed here, but it has been a popular topic in the last few years, and we refer the interested reader to a recent survey of contributions in this specific field [146]. We have previously addressed the possibility of using mobile phone signatures as a cheap census technique; Isaacman et al. take this analysis a step further and show how one can estimate carbon footprint emissions [147] based on the mobility observed from mobile phone activity.

Many applications of mobility modeling aim at transport planning and traffic monitoring, with evident applications in accident management and traffic jam prevention. Over the last (almost) 20 years, a large number of attempts have been made to enhance prediction using mobile phone data. This topic is only briefly addressed here with a few recent contributions; for more information on the research in this field, we refer the interested reader to a review published in 2011 [148]. One example of such an application was proposed by Nanni et al., who created the OD-matrix of Ivory Coast and then assigned this matrix to the road network [122], producing a map of estimated traffic flows on the main roads of the country (see Figure 22). In a similar approach, Toole et al. estimate the flow of residents between each pair of intersections of a city's road map [149]. They show that these estimations, coupled with traffic assignment methods, can help estimate congestion and detect local bottlenecks in the city. In a related study, Wang et al.
examine in more detail the usage patterns of road segments, and show that a road's usage depends on its topological properties in the road network, and that roads are usually used only by people living in a small number of different locations [150]. The authors further show that taking advantage of this observation helps create better strategies for reducing travel time and congestion in the road network of a city.

Figure 22: Traffic model over a 24-hour period for Ivory Coast (left) and the Abidjan area (right). Figure reproduced from [122].

Going one step further, Berlingerio et al. designed an algorithm to detect which means of transport people would choose, including public transportation or private means, in order to infer how many people used which public transportation routes throughout the day [121]. The authors then proposed a model of the local transportation network of Abidjan highlighting the routes that are taken most often, and were able to show how specific small changes to the network could improve the average travel time of commuters by 10%. Among other possible uses of information on commuting flows, McInerney et al. suggested using the regular mobility of people for physical package delivery to the most rural areas [151], showing on the one hand the feasibility of this method, and on the other hand a reduction of 83% in the total delivery time for rural areas. Other applications of prediction algorithms for the next journey of users include, for example, a recommender system for bush taxis suggested by Gambs et al. [152], which uses the predicted next location of users to recommend to pedestrians suitable means of transport in their neighborhood. By monitoring the movements of people towards special planned events, Calabrese et al. [153] show that the type of event correlates strongly with the neighborhood of origin of the attendees.
Such a cartography of taste can be used by authorities when planning for the congestion effects of large events, or for targeted advertising of events (see Quercia et al. [154]). In a closely related approach, Cloquet and Blondel use the analysis of anomalous behavior in mobile phone activity to predict attendance at large-scale events such as demonstrations or concerts. As a first step in that direction, they propose a method to determine the time after which no more people will arrive at a certain event [155]. To do this, they propose two methods: the first uses the mobility of people traveling towards the event to model the flux of the arriving or leaving crowd; the second is based on the recorded interactions between people already at the event and other users within 20 km. The authors show that using these methods, they are able to predict the time when no more people will join the event up to 43 minutes in advance. Another related application was explored by Xavier et al., who analyzed the workload dynamics of a telecommunication operator before and after events such as soccer matches [156], in order to help the management of mobile phone networks during such events. Finally, mobility traces can also be used to monitor temporary populations [157], such as tourists. Kuusik et al. [158] studied the mobility of roaming numbers in Estonia over 5 consecutive years, showing the potential for authorities to understand and efficiently target visiting tourists.

Epidemics

In recent years, a lot of research has been done on using Big Data to help monitor and prevent epidemics of infectious diseases. If one can model information spreading in mobile phone networks (recall Section 6), then the same theory can also be used to model the spreading of real infectious diseases.
As mobile phone data can help follow the movements of people (recall Section 5), these movements can also provide information about how a disease may travel and spread across a country. The dynamics at hand usually depend on the type of disease and how it can be transmitted; hence many articles, of which we review a few here, propose different models based on the mobility of people to predict the spread of an epidemic. Using mobile phone traces, Wesolowski et al. measure the impact of human mobility on malaria, comparing the mobility of mobile phone users to the prevalence of malaria in different regions of Kenya, and identify the main importation routes that contribute to the spreading of malaria [159]. In another study, Tizzoni et al. [160] validate the use of mobile phone data as a proxy for modeling epidemics. The authors extract a network of commuters in three European countries by detecting home and work locations for each mobile phone user, and compare this network with the numbers of commuters obtained by census. On these networks of commuters, they run agent-based simulations of epidemics spreading across the country. They show that the invasion trees and spatio-temporal evolution of epidemics are similar in the census-based and mobile-phone-extracted networks of commuters (see Figure 23). Lacking additional information, most models assume homogeneous mixing between people that are physically within the same region or area. Frias-Martinez et al. propose another agent-based model of epidemic spreading, using individual mobility and the social networks of individuals to build a more realistic model [161]. Instead of assuming homogeneous mixing within a given area, an individual has a higher probability of meeting an infected agent in the same area if they have communicated with each other before. The authors further split the social network of contacts and the mobility model of an individual between weekday and weekend to achieve better accuracy.
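Agent-based epidemic simulations of this kind typically build on compartmental dynamics; a minimal discrete-time SIR process on a contact network can be sketched as follows (the network, parameters and update order are illustrative assumptions, not the models of [160] or [161]):

```python
import random

def sir_on_network(adj, beta, gamma, seed, max_steps=1000, rng=random):
    """Discrete-time SIR: at each step every infected node infects each
    susceptible neighbour with probability beta, then recovers with
    probability gamma.  Returns the final S, I, R sets."""
    S = set(adj) - {seed}
    I = {seed}
    R = set()
    for _ in range(max_steps):
        if not I:
            break
        newly_infected = {v for u in I for v in adj[u]
                          if v in S and rng.random() < beta}
        recovered = {u for u in I if rng.random() < gamma}
        S -= newly_infected
        I = (I | newly_infected) - recovered
        R |= recovered
    return S, I, R
```

Running such a process on both a census-based and a CDR-based commuter network, and comparing who infects whom, is in spirit what the comparison of invasion trees in [160] does.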
Figure 23: Epidemic invasion trees. Invasion trees observed using the census (left) and the mobile phone network (right); the seed of the simulation is in Barcelonnette (black node). Figure reproduced from [160].

Going a step further, a few contributions to the D4D challenge [162] investigated the best ways to monitor and influence an epidemic rather than just predicting its spread. In this framework, Kafsi et al. [163] propose a series of measures applicable at the individual level that could help limit the epidemic. They investigate the effect of three different recommendations, namely (1) do not cross community boundaries; (2) stay with your social circle; and (3) go/stay home. Considering that any of these three recommendations could be sent to different users in the network via their mobile phones, and that probably only a fraction of the contacted users would comply, the authors evaluate the impact that implementing such a system could have on the spreading process. They show that these measures can weaken the epidemic's intensity, delay its peak, and in some regions even seriously limit the number of infected individuals. Using the same dataset, Lima et al. proposed a different approach [164], namely using the connections between people to launch an information campaign about the epidemic, in the hope of reducing the probability of infection for better-informed individuals. The authors use an SIR model and the observed mobility of mobile phone users to simulate epidemics unfolding on a population, and evaluate the impact of geographic quarantine on the spreading of the disease, as well as the impact of an information campaign reducing the risks of infection for 'aware' individuals.
They show that the quarantine measures do not seem to delay the endemic state, even when almost half the population is confined to their own sub-prefecture, whereas the less invasive information campaign seems to significantly limit the final fraction of infected individuals, opening this topic for further research.

This field of research has shown again how valuable mobile phone data could be to save lives, and potentially to monitor and limit epidemics of infectious diseases. However, most models and studies are limited by the lack of ground-truth data to compare their results with. Indeed, how would one know from whom an individual caught the disease, and what the exact route was towards each infected person? Another shortcoming of this area of research comes from the current difficulty of gaining access to mobile phone datasets, especially concerning cross-border mobility. If modeling mobility in Africa could be useful for containing the current Ebola outbreak, cross-border mobility would be very valuable data, as discussed in [165]. However, gaining access to such data is more difficult, as it involves getting approval from more than one country for a single dataset. In [166], the authors suggest guidelines to share data for humanitarian use while preserving the privacy of users.

Health and stress detection

While infectious diseases are still a major cause of death in developing countries, attention has slowly shifted, in more developed parts of the world, towards chronic diseases such as cardiovascular diseases or cancer, and their causes. Among the studied topics, daily stress in the work environment has become a major problem in recent years. In this framework, Bogomolov et al. conducted an experiment to find out whether daily stress levels could be predicted from non-invasive sensors, including mobile phone data [167, 168]. Using only one source of data resulted in poor predictive capacity.
However, combining mobile phone data with features of personality traits and weather conditions, they produced a predictive model using 32-dimensional feature vectors to classify users as 'stressed' or 'not stressed', achieving 72% accuracy. Interestingly, among the features extracted from mobile phone data that were selected as useful for the model, many were Bluetooth proximity features.

Viral marketing

In 1970, Katz and Lazarsfeld introduced the breakthrough idea that, more than mass media, the neighborhood of an individual influences their decisions [169]. This idea gave rise to the concept of opinion leaders, that is, persons who have a high influence on their neighborhood, although some debate exists on the exact role played by opinion leaders [170], and it introduced the concept of viral marketing. In opposition to direct marketing, the principle of viral marketing is that consumers respond better to information received from a friend than to information provided through direct means of communication. Viral marketing thus seeks ways of making people communicate about a brand, so that friends of an early adopter adopt the product in their turn. In particular, mobile viral marketing has proved to be an effective means of propagation for such marketing campaigns. The influence of one's neighbors can be observed using CDR data coupled with data on product adoption. In a study of the adoption of 4 mobile services, Szabó and Barabási [171] showed that the adoption of a product by a user was highly correlated to the adoption by their neighbors for some services only, while other services showed no viral attribute. A similar study by Hill et al. [172] on the adoption of an undisclosed technological service showed again that neighbors of nodes that had adopted the service were 3 to 5 times more likely to adopt the service than users selected by the company's best-practice marketing methods.
A related result was also obtained in the FunF project by Aharony et al. [173], who showed that the number of commonly installed applications was significantly larger for pairs of users who often encounter each other physically. Risselada et al. [174] further showed that the influence of one's neighbors on the adoption of a product evolves with time, depending on the elapsed time since the introduction of the product on the market. Even though one could use a simple SI or SIR model to characterize viral marketing, it is more likely in this case that a user will adopt a product if several of their neighbors have already adopted it and the information comes from several different sources. One possible way to model these dynamics is to use a threshold model: each user is assigned a threshold, and a node adopts a product if the proportion of its neighbors that have adopted the product is above the node's threshold. The model can be either deterministic, fixing a priori the same threshold for all nodes, or stochastic, drawing thresholds from a probability distribution. To take into account the timing of contacts between people, one can then add to this model the condition that a node will adopt a product only if it has enough contacts with different adopting neighbors within a given time frame. Backlund et al. studied the effect of the timing of call sequences on such models [175]. Here again, they observe that the burstiness of events tends to hinder the propagation of product adoption, increasing the waiting times between contacts compared to a randomized sequence of contacts. The identification of 'good' spreaders for a viral marketing campaign is a difficult task, especially given the usually very large size of the datasets, which makes it hard to extract useful information within a small time frame.
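The deterministic variant of the threshold model described above can be sketched in a few lines (the graph and thresholds are illustrative assumptions):

```python
def threshold_adoption(adj, thresholds, seeds):
    """Deterministic threshold model: a node adopts once the fraction
    of its neighbours that have adopted reaches its threshold; iterate
    until no further adoption occurs and return the adopter set."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in adopted or not adj[v]:
                continue
            frac = sum(u in adopted for u in adj[v]) / len(adj[v])
            if frac >= thresholds[v]:
                adopted.add(v)
                changed = True
    return adopted
```

The stochastic variant simply draws `thresholds[v]` from a probability distribution, and the time-respecting variant additionally requires the adopting contacts to fall within a given time window.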
With this in mind, the authors of [176] proposed a local definition of social leaders, nodes that are expected to play an influential role in their neighborhood. They defined the social degree of a node as the number of triangles in which the node participates, and social leaders as nodes that have a higher social degree than their neighbors. This definition has its use in marketing campaigns, identifying the customers who should be contacted to start the campaign, an approach that proved efficient [177]. Moreover, social leaders can also be used to reduce the complexity of a network, by analyzing only the network of social leaders instead of the whole network, with possible uses in visualization and community detection.

Crime detection

In criminal investigations, the police often request the mobile phone records of suspected individuals for inspection, looking for evidence. The analysis of such data can not only reveal behavioral patterns of a single suspected individual, but also uncover potential criminal organizations through mobile relationships. Social network analysis therefore makes it possible to uncover the structure of criminal networks, and also to quantify the flow of information between their members. In this framework, a research group from the University of Messina proposed a toolbox called 'LogAnalysis' to analyze CDRs and the associated social networks, with the aim of detecting criminal organizations [178, 179]. This toolbox measures a series of metrics of the network and of its nodes, such as node centrality or clustering coefficient, and further presents an analysis of the dynamics of the graph. The authors add visualization tools to the analysis, enabling forensic analysts to easily spot nodes that are more central, or to visualize clusters and sub-clusters of tightly related individuals.
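Returning to the social-leader definition above: both the social degree and the leader set are cheap to compute locally. A sketch, using a strict comparison as in the definition (the toy graph is an assumption):

```python
def social_degree(adj):
    """Social degree of a node: the number of triangles it belongs to."""
    return {v: sum(1 for u in adj[v] for w in adj[v]
                   if u < w and w in adj[u])
            for v in adj}

def social_leaders(adj):
    """Social leaders: nodes with a strictly higher social degree than
    every one of their neighbours (after the definition in [176])."""
    deg = social_degree(adj)
    return {v for v in adj if all(deg[v] > deg[u] for u in adj[v])}
```

On a large CDR graph one would then restrict attention to the subgraph induced by these leaders, which is the complexity-reduction use mentioned above.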
The approach of this type of research is somewhat different from most studies on CDR datasets presented in the paragraphs above: it is not based on studying anonymized datasets and extracting information on the behavior of a population, but rather on studying the network around a specific individual or a specific group of suspects whose identity is clearly known to the forensic analyst carrying out the investigation. Using a different approach, Bogomolov et al. use indicators derived from mobile phone traces to predict whether a certain area will be a crime hot spot in the next month [180]. Using dynamically updated features, such as the estimated number of people in each area, or the age, gender and work/home/visitor group splits derived from mobile phone data, their model achieves almost 70% accuracy in predicting whether a given area is at risk of being the scene of a crime in the next month. This type of research can therefore be used by the police to achieve a better response time, or to direct their attention towards the places that are most likely to require an intervention.

Development

The last couple of years have seen a spectacular rise of interest in applications of mobile phone data for development purposes. Many contributions to the 'Data for Development' (or D4D) challenge launched by Orange [162] used different bits of information from the data of mobile phone users to help the development of Ivory Coast. Several of these contributions have already been reviewed in the previous paragraphs; for the full set of research projects, see [181]. While in the developed world much of what can be inferred from mobile phone data is already known (population density, some of the mobility traces, …), this information can be very valuable in the developing world, where census data is often unavailable or several years old.
Modeling the mobility of people in developing countries can provide very useful information for local governments when making decisions regarding changes in local transportation networks, or for urban planning. Indeed, in rural areas of low-income countries where the most recent technologies are not always available, up-to-date information on how many people commute from one place to another can be very useful and help policy makers decide on the next steps towards development. Sometimes, even very basic information, such as a map of the road network, can be difficult to obtain in remote places. Salnikov et al. used the D4D challenge dataset to detect high-traffic roads by selecting displacements only within a certain range of velocities [182]. They were able to redraw the main road structure of the country and even identified unknown roads, which they validated a posteriori. With applications ranging from cheap census techniques to mobility planning and the fight against infectious diseases, we expect that in the next few years the developing world will profit from the availability of such rich databases, and that research will provide useful insights into how to better help towards development.

Data representativity

Finally, one may raise the question of the significance of the data: given that only a fraction of a country's population is reached by one operator, to what extent may the results on a dataset be generalized to larger populations? Clearly, quantitative results obtained in these studies, such as the degrees of nodes, cannot be taken for granted, but one may expect that as long as the population sample is not biased, qualitative observations such as the broadness of the degree distribution or the organization of nodes into communities are significant information on the structure of communication networks. However, knowing whether the sample is biased or not is almost impossible, especially given the lack of information about the users in CDR databases. Frias-Martinez et al.
raised this question in [183], regarding for example the socio-economic level of mobile phone users, which could be biased compared to the whole population. They validated their results by performing a series of statistical tests comparing the population in their sample to the overall population using census data, and showed that no significant difference was observed. In the general case, however, data about the users in CDR databases is often missing, and census data may not always be available for comparison. Regarding mobility models, one could argue that active mobile phone users are more likely to be on the move than the rest of the population. A mobility model based on mobile phone users is therefore likely to overestimate the fraction of the population that is traveling. Buckee et al. raised this question regarding such models, further arguing that bias in models of mobility could, in turn, influence the spreading of modeled epidemics [184]. Onnela et al. also address this problem, studying how paths differ depending on how much of the network is observed [185]. They show that, counterintuitively, paths in partially observed networks may appear shorter than they actually are in the underlying full network. Ranjan et al. studied a related question regarding the mobility of users [186]: given that one only sees data points where and when a user has made a phone call, to what extent are these points representative of the user's mobility? They found that sampling only the voice calls of an individual will usually do well at uncovering locations such as home and work, but will also, in some cases, introduce biases in the observed spatio-temporal behavior of the user. In a recent study, Stopczynski et al. widened their coverage by coupling databases from many sources on the same set of users, in the context of the Sensible DTU project [7].
While this approach clearly captures more than just mobile phone records, its coverage is limited (1,000 subjects), as the users had to give their explicit consent to share their data: Facebook interactions, face-to-face encounters, and answers to a survey. The authors are therefore able to analyze a bigger picture than studies based on mobile phone data alone, and show that mobile phone data by itself may not be enough to capture a comprehensive profile of a user. Learning from these studies, one should therefore be cautious when drawing conclusions from such analyses, and keep in mind that observing the traces left by mobile phones only reveals selected parts of the whole picture.

Privacy concerns

The collection and availability of personal behavioral data such as phone calls or mobility patterns raises evident questions about the security of users' privacy. The content of phone calls or text messages is not recorded, but even the simple knowledge of communication patterns between individuals, or of their mobility traces, contains highly personal information that one typically does not want disclosed. During the past decade, a fairly large amount of personal data was made available to researchers via, among others, CDR datasets. The companies sharing their data do not always know how much personal information can be inferred from the analysis of such large datasets, and this has led, so far in cases other than mobile phone data, to a few scandals in recent years [187, 188]. In turn, these incidents led, in 2012, to a procedure for adapting legal measures in Europe [189]: the previous European law on the protection of privacy and data sharing dated back to 1995 [190], long before the era of what is now called 'Big Data'. Moreover, the use of data has become global, and an organization based in a specific country uses data generated by its users from all over the world, hence the need for similar regulations in different countries.
So far, this has not yet been achieved, as legislation in different parts of the world differs widely. In the USA, for example, there is no single law regulating data protection and privacy; instead, laws are specified on a sector-by-sector basis, so that data protection in the finance or health sectors is regulated by separate authorities [191]. In Europe, on the contrary, the new directive is designed to apply everywhere in Europe, to the people and organizations who collect and manage personal data [192]. The procedure often used when a company shares private data with a third party, such as a research group, is the following: the company keeps on secured machines the exact private information on its customers, such as names, addresses or phone numbers, as well as the CDRs, which contain the phone number of the caller and of the callee, the time stamp of the call, the tower to which the caller was connected (idem for the callee), and additional information such as special service usage. The anonymization procedure then consists in replacing each phone number by a randomly generated number, such that each user has a unique random ID from which it is impossible to retrieve the original phone number by reverse engineering. The CDRs are then modified so that phone numbers are replaced by the corresponding IDs. After this procedure, the CDRs are anonymized and can be transferred to a third party. The standard procedure then implies that the third party signs a non-disclosure agreement, stipulating that they cannot make the CDR data available; the agreement usually also restricts the range of potential research questions to be explored with the data. The safety of users' privacy is then guaranteed both by the removal of information allowing to identify users and by the assumption that the third party does not use the data with any malicious intent.
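The key requirement of this pseudonymization step is consistency: the same phone number must always map to the same random ID, across both caller and callee fields. A toy sketch follows; it uses a plain random mapping for illustration only, whereas real pipelines rely on vetted, keyed cryptographic primitives:

```python
import random

def pseudonymize(cdrs, rng=random):
    """Replace every phone number in (caller, callee, timestamp, cell)
    records by a unique random ID, consistently across all records.
    Returns the anonymized records and the secret number->ID mapping,
    which must stay on the operator's secured machines."""
    mapping = {}
    def pid(number):
        if number not in mapping:
            mapping[number] = rng.getrandbits(64)
        return mapping[number]
    anonymized = [(pid(caller), pid(callee), ts, cell)
                  for caller, callee, ts, cell in cdrs]
    return anonymized, mapping
```

Note that, as the de-anonymization results discussed next show, removing identifiers in this way does not by itself guarantee anonymity: the structure of the records themselves can be identifying.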
De-anonymization attacks

Some research on mobile phone datasets has challenged this apparent feeling of security, and recent results are opening new ways of considering the privacy problem. Using CDR data containing mobility traces, Zang and Bolot [193] show how it is possible to uniquely identify a large fraction of users from a small number of preferred locations. Their methodology goes as follows: for each user, one can list the top N locations at which their calls have been recorded. The authors then show that, depending on the granularity of the locations, a non-negligible fraction of users can be uniquely identified by only 2 locations. For example, if locations are taken at cell level, up to 35% of the users of a communication network of 25 million users can be uniquely identified by 2 locations, which are likely to correspond to home and work. Thus, while the anonymization procedure is intended to prevent any linkage between the dataset and individuals, this attack makes it possible to retrieve the mobility and calling pattern of targeted users given access to as little information as their home and work addresses. If additional data such as year of birth or gender were available, as is common in most datasets, it would be possible to identify very large fractions of the network. In this attack scheme, however, one has to know the profile of the user quite well for them to be found in the database. Using a different approach, de Montjoye et al. [194] show that knowing only four spatio-temporal points at which a user was present is enough to uniquely re-identify the user with 95% probability. Using only very little information, which could easily be available to an attacker, the authors thus show how unique each user's trajectory is.
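The top-N attack of Zang and Bolot can be illustrated with a toy computation of how many users are uniquely pinned down by their two most frequent locations. This is a minimal sketch on made-up data (the real study operated on a network of tens of millions of users, at several spatial granularities):

```python
from collections import Counter

def top_locations(visits, n=2):
    """Return a user's n most frequent locations as a sorted tuple."""
    counts = Counter(visits)
    return tuple(sorted(loc for loc, _ in counts.most_common(n)))

def unique_fraction(users, n=2):
    """Fraction of users whose top-n location set is unique in the dataset."""
    signatures = Counter(top_locations(v, n) for v in users.values())
    unique = sum(1 for v in users.values() if signatures[top_locations(v, n)] == 1)
    return unique / len(users)

# Toy data: for each user, the cell towers at which calls were recorded.
users = {
    "a": ["home1", "home1", "work1", "bar"],
    "b": ["home2", "work1", "home2"],
    "c": ["home1", "work1", "home1"],  # same top-2 signature as user "a"
}
print(unique_fraction(users))  # 1/3: only user "b" has a unique signature
```

A user with a unique signature is exactly one who can be re-identified by an attacker knowing, say, their home and work cells.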
De Montjoye and co-authors further show that coarsening the resolution in space or time does little to increase the amount of information needed to re-identify a user, thus leaving the database very vulnerable to this type of de-anonymization attack. Other possible attacks have also been considered on anonymized online social networks. Although these attacks are not likely to apply directly to mobile phone data, we briefly mention some of them, as breaches found in other applications may point to similar breaches in mobile phone datasets. For example, Backstrom et al. [195] describe a family of local attacks which make it possible to retrieve the position of some targets in the network, and hence to uncover the connections between those targets. The authors showed that, on a network of 4.4 million nodes, by controlling the links of 7 dummy nodes they managed to uncover the presence or absence of 2,400 links between 70 target nodes, without being detected by the database manager. On a wider scale, Narayanan and Shmatikov [196] show that it is possible to retrieve the identity of a large part of a social network by combining it with an auxiliary network. Such a situation arises when users are present in two separate datasets. The authors then show that even if this overlap is available for only a fraction of the users, it is still possible to retrieve the information for a large part of the network. Considering that other types of databases may be available (for example Twitter or Facebook in addition to CDRs), the possibility of de-anonymization is even greater. Such combinations have already led to problematic situations, where specific people were re-identified in supposedly anonymized medical records or movie preference databases [187, 188, 197].
Indeed, two separate databases coming from different sources may each be anonymized and safe to release separately, yet still present a great danger for privacy if an attacker combines and crosses the information contained in both. Against these possible threats of privacy breach, one may wonder what solutions have been proposed to counter such attacks. If research on mobile phone datasets only considers average behaviors rather than exact patterns, a simple countermeasure is to perform small modifications of the dataset that do not alter its general features but have dramatic consequences for the algorithms used by attackers, who search for exact matches between statistics of the network and a priori known properties of their targets. Another protection against such attacks, particularly when mobility data is involved, is to generate new random identifiers for each user at regular time intervals. Regenerating random identifiers makes it impossible to use longitudinal information to determine the preferred locations of a user. As shown by Zang and Bolot [193], if the ID of each user is changed every day, only 3% of the nodes can still be identified using their top 2 locations. While this method seems efficient for protecting the privacy of users, it substantially reduces the information that can be retrieved from such a dataset for research purposes. A similar approach also proved useful against the attack scheme considered by de Montjoye et al.: Song et al. show in [198] that changing the ID of each user every six hours substantially reduces the fraction of unique trajectories in the dataset. A compromise between preserving anonymity and keeping enough information in the dataset is difficult to achieve.
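The ID-rotation countermeasure can be sketched as follows; this is illustrative only, with hypothetical field names, and the period granularity (here, one day) is the tunable privacy/utility knob discussed above.

```python
import secrets
from collections import defaultdict

def rotating_pseudonyms(records, period_key):
    """Assign each user a fresh random ID per time period.

    `period_key` extracts the period (e.g. the day) from a record, so
    the same user receives unlinkable IDs in different periods.
    """
    mapping = defaultdict(lambda: secrets.token_hex(8))
    return [
        {**rec, "user": mapping[(rec["user"], period_key(rec))]}
        for rec in records
    ]

records = [
    {"user": "alice", "day": 1, "cell": "A"},
    {"user": "alice", "day": 1, "cell": "B"},
    {"user": "alice", "day": 2, "cell": "A"},
]
rotated = rotating_pseudonyms(records, period_key=lambda r: r["day"])
# Within a day the two records remain linkable; across days they do not.
assert rotated[0]["user"] == rotated[1]["user"]
assert rotated[0]["user"] != rotated[2]["user"]
```

Shortening the period strengthens privacy (fewer linkable records per ID) at the cost of longitudinal analyses, which is precisely the trade-off quantified in [193] and [198].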
In collaboration with the Université catholique de Louvain, the provider Orange tried to achieve this for their first D4D challenge before releasing a dataset to a wide community of researchers (more than 150 research teams participated). By releasing four different datasets, each anonymized differently [162] and containing information at different spatio-temporal resolutions, they could guarantee the anonymity of users. Yet the loss of information was not too dramatic, as many studies obtained very good results using the provided aggregated information. The challenge was a success and a second one followed in 2014-2015, using a wider dataset from Senegal [3]. Other similar initiatives include the Telecom Italia Big Data Challenges of 2014 and 2015 [4], whose goal is to show the variety of applications that can be derived from the use of Big Data, including mobile phone data but also weather, Twitter, public transport and energy data. These data are aggregated and anonymized, and are therefore made openly available online to all who wish to analyze them [199]. Another question closely linked to this research is how to quantify the anonymity of a database. Latanya Sweeney proposed the measure of k-anonymity [200]: a database achieves k-anonymity if, for any tuple of values of previously defined entries of the database, there are at least k users corresponding to it, making it impossible to re-identify a single user with only information on these entries. Of course, the larger k is, the more difficult this becomes to achieve, especially in a CDR database containing spatio-temporal information about each call. Moreover, when an attacker is looking for a particular person in the database, enabling them to reduce the number of potential corresponding users to a small set is sometimes already a lot of information, and too big a risk to release the database publicly.
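A check of k-anonymity over a chosen set of entries (the quasi-identifiers) can be written directly from Sweeney's definition. This is a toy sketch with hypothetical column names:

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Smallest group size over all combinations of quasi-identifier values.

    A table is k-anonymous if every combination of quasi-identifier
    values is shared by at least k rows, i.e. the returned value is >= k.
    """
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

rows = [
    {"gender": "F", "birth_year": 1980, "cell": "A"},
    {"gender": "F", "birth_year": 1980, "cell": "B"},
    {"gender": "M", "birth_year": 1975, "cell": "A"},
    {"gender": "M", "birth_year": 1975, "cell": "C"},
]
print(k_anonymity(rows, ["gender", "birth_year"]))          # 2 -> 2-anonymous
print(k_anonymity(rows, ["gender", "birth_year", "cell"]))  # 1 -> not anonymous
```

The second call shows how quickly anonymity degrades as more entries are treated as quasi-identifiers, which is why CDR databases, with a spatio-temporal record per call, are so hard to make k-anonymous for any useful k.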
Another potential solution for preserving privacy was suggested by Isaacman et al. [201], who propose using synthetic data to model the mobility of people. They used mobile phone data from two American cities to validate their model, showing that, based only on aggregated data and probability distributions, it could reproduce many features of the users' mobility without any synthetic trace corresponding to a real person. Mir et al. further proposed DP-WHERE [202], an evolved version of the previous model that adds controlled noise to the set of empirical probability distributions. This noise guarantees that the model achieves differential privacy, that is, that the analyses will not be significantly different whether or not a single individual is in the database from which the model is derived, even if this individual has an unusual behavior. However, one may wonder whether such synthetic data could be used to carry out analyses that were not previously tested on the real database, as no guarantee exists on the outcome of analyses that were not foreseen by the researchers who tested the model for compatibility with empirical data.

Personal data: ownership, usage, privacy

Phone companies collect data about their users: their habits, their mobility, their acquaintances. Still, the legislation up to 2013 was fuzzy [203], discouraging companies from sharing such data for research and making customers feel that George Orwell's predictions are coming true, especially after the 2013 revelations of how much personal information the NSA was collecting from many sources [204]. Such data represent an enormous added value, both to companies, for marketing purposes and client screening, and to authorities, for traffic management or epidemic outbreak prevention.
It is often forgotten that the use of mobile phone datasets also has a huge positive potential in the developing world, as many of the projects proposed for the Data for Development challenge showed [181], be it for monitoring the health status of populations, generating census data or optimizing public transport. Ironically, even though the research community has shown that such data have the potential to save lives and that using them is technically possible, it is still often difficult to access the data because of privacy concerns, even when the data is aggregated and non-disclosure agreements are signed. Such opportunities, for corporations and authorities alike, call for standardized procedures for the acquisition, conservation and usage of personal data, which do not yet exist. The communication of these procedures to customers has not been clear, nor are the possibilities for a user to 'opt out' if they do not want their personal data released. With this intent, several voices have recently been raised urging authorities to develop a 'New Deal' [205] on data ownership, in which users would own their personal data as well as the decision to provide it, in exchange for payment, to companies interested in its usage. A transparent system equipped with the necessary protocols and regulation for a transparent use of personal data would also facilitate the access to data for researchers [206], and could thus benefit society as a whole.

Conclusion and research questions

The first analyses of mobile phone datasets appeared in the late 90's, and this decade of research has produced a large number of surprises and several promising directions for the future. In this paper, we have reviewed the most prominent results obtained so far, in particular on the structure of our social networks and on human mobility.
We decided not to cover some closely related questions, such as churn prediction (see [207–210]) or dynamic pricing [16, 211], which are rather business-related topics for which a vast literature is available. The recent availability of mobile phone datasets has led to many discoveries about human behavior. We are not all similar in our ways of communicating, and differences between users can span several orders of magnitude. Our networks are clustered in well-structured groups, which are also well-localized in space. With the rise of communication technology, some predicted that the barrier of distance would fall, shrinking the world into a small village. Mobile phone data suggests instead that distance still plays a role, but that its impact is nuanced by the varying population density. Regarding mobility, individuals appear to have highly predictable movements [212], while as populations we act and react in a remarkably synchronized way. In this context, the availability of mobile phone data has for the first time allowed researchers to observe populations from a God's-eye point of view, monitoring the pace of daily life or the response to catastrophes. The ubiquity of mobile phones - there are nowadays more mobile phones than personal computers in use - which allows us to obtain such precise results, also raises the threat of viral outbreaks, from which mobile phones have been safe until now. Mobile viruses could be a risk for users' privacy, just as the anonymized datasets provided by operators to third parties for research could potentially be de-anonymized. The availability of such enormous datasets creates a huge potential that could benefit society, up to the point of saving lives. The research conducted so far represents only the tip of the iceberg of what could be done if these data were adequately exploited.
However, it falls to the authorities to ensure that such datasets cannot be misused. The number of possible research questions on mobile phone datasets is gigantic. In this last part, we present one research direction that we believe to be highly important and still not addressed in its most general form. A large body of research has been conducted on the analysis of social networks based on CDRs. As appears from the different publications on this topic, there exist some common features but also many differences in the structure of the constructed networks. Recall as the simplest example the degree distributions, which show different functional forms for most datasets. These differences may, of course, be linked to cultural differences between the countries of interest, but there are probably other, quantifiable, reasons. The datasets differ greatly in the market shares of the operators, in the time span of the data collection period, in the size of the network and in the geographical span of the considered country. The method of network construction also differs from study to study and has a tangible impact on the network structure: the use of directed or undirected links, of weights, and of thresholds for removing low-intensity or non-mutual links all greatly affect the structure, and hence the statistical features, of the obtained network. We therefore believe that a serious analysis, on both the theoretical and the empirical side, of the influence of these factors on the general structure of mobile phone networks may lead to a general framework allowing one to interpret differences between results obtained on several datasets with the knowledge of potential side-effects. This question is closely related to the even more general question of the significance of the information provided by CDR data. Recalling what was said in Section 2, CDR datasets are noisy: some links appear there by chance, while others have not been captured in the dataset.
It would thus be interesting to question the stability of the obtained results, given that the real network may differ from what has been observed in the data. This links with the work of Ghoshal [213], who analyzed the stability of PageRank under random noise on the network structure. Again, in this framework, no real theoretical result has yet been achieved that would allow one to characterize which results are significant and which are not.

References

The world in 2014: ICT facts and figures. International Telecommunication Union. http://www.itu.int/ Kwok R (2009) Personal technology: phoning in data. Nature 458(7241):959-961 de Montjoye YA, Smoreda Z, Trinquart R, Ziemlicki C, Blondel VD (2014) D4D-Senegal: the second mobile phone data for development challenge. ArXiv preprint arXiv:1407.4885 Telecom Italia big data challenge. http://www.telecomitalia.com/tit/en/bigdatachallenge/contest.html Eagle N, Pentland A (2006) Reality mining: sensing complex social systems. Pers Ubiquitous Comput 10(4):255-268 Karikoski J, Nelimarkka M (2010) Measuring social relations: case otasizzle. In: IEEE second international conference on social computing (SocialCom). IEEE Press, New York, pp 257-263 Stopczynski A, Sekara V, Sapiezynski P, Cuttone A, Madsen MM, Larsen JE, Lehmann S (2014) Measuring large-scale social networks with high resolution. PLoS ONE 9(4):e95978 Zipf GK (1949) Human behavior and the principle of least effort: an introduction to human ecology. Addison-Wesley Press, Reading Cortes C, Pregibon D, Volinsky C (2001) Communities of interest. In: Hoffman F et al. (eds) Advances in intelligent data analysis. LNCS, vol 2189. Springer, Berlin, pp 105-114 Krings G (2012) Extraction of information from large networks. PhD thesis, Université catholique de Louvain Abello J, Pardalos PM, Resende MGC (1999) On maximum clique problems in very large graphs. In: External memory algorithms. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol 50.
Am Math Soc, Providence, pp 119-130 Aiello W, Chung F, Lu L (2000) A random graph model for massive graphs. In: Proceedings of the thirty-second annual ACM symposium on theory of computing. ACM, New York, pp 171-180 Lambiotte R, Blondel VD, de Kerchove C, Huens E, Prieur C, Smoreda Z, Van Dooren P (2008) Geographical dispersal of mobile communication networks. Physica A 387(21):5317-5325 Onnela JP, Saramaki J, Hyvonen J, Szabo G, Lazer D, Kaski K, Kertesz J, Barabasi AL (2007) Structure and tie strengths in mobile communication networks. Proc Natl Acad Sci USA 104(18):7332-7336 Li M-X, Palchykov V, Jiang Z-Q, Kaski K, Kertész J, Miccichè S, Tumminello M, Zhou W-X, Mantegna RN (2014) Statistically validated mobile communication networks: evolution of motifs in European and Chinese data. New J Phys 16:083038. doi:10.1088/1367-2630/16/8/083038 Kim Y, Telang R, Vogt WB, Krishnan R (2010) An empirical analysis of mobile voice service and SMS: a structural model. Manag Sci 56(2):234-252 Kovanen L, Saramäki J, Kaski K (2011) Reciprocity of mobile phone calls. JDySES 2(2):138-151 Ling R, Bertel TF, Sundsøy PR (2012) The socio-demographics of texting: an analysis of traffic data. New Media Soc 14(2):281-298. doi:10.1177/1461444811412711 Nanavati AA, Gurumurthy S, Das G, Chakraborty D, Dasgupta K, Mukherjea S, Joshi A (2006) On the structural properties of massive telecom call graphs: findings and implications. In: Proceedings of the 15th ACM international conference on information and knowledge management. ACM, New York, pp 435-444 Barabási AL (2009) Scale-free networks: a decade and beyond. Science 325(5939):412-413 Watts DJ, Strogatz SH (1998) Collective dynamics of small-world networks. Nature 393(6684):440-442 Clauset A, Shalizi CR, Newman ME (2009) Power-law distributions in empirical data.
SIAM Rev 51(4):661-703 Seshadri M, Machiraju S, Sridharan A, Bolot J, Faloutsos C, Leskovec J (2008) Mobile call graphs: beyond power-law and lognormal distributions. In: Proceeding of the 14th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, New York, pp 596-604 Krings G, Karsai M, Bernhardsson S, Blondel VD, Saramäki J (2012) Effects of time window size and placement on the structure of an aggregated communication network. EPJ Data Sci 1:4 Onnela JP, Saramaki J, Hyvonen J, Szabo G, de Menezes MA, Kaski K, Barabasi AL, Kertesz J (2007) Analysis of a large-scale weighted network of one-to-one human communication. New J Phys 9:179 Granovetter MS (1973) The strength of weak ties. Am J Sociol 78:1360-1380 Onnela J-P, Saramäki J, Kertész J, Kaski K (2005) Intensity and coherence of motifs in weighted complex networks. Phys Rev E 71(6):065103 Du N, Faloutsos C, Wang B, Akoglu L (2009) Large human communication networks: patterns and a utility-driven generator. In: Proceedings of the 15th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, New York, pp 269-278 Kianmehr K, Alhajj R (2009) Calling communities analysis and identification using machine learning techniques. Expert Syst Appl 36(3):6218-6226 Zhang H, Dantu R (2008) Discovery of social groups using call detail records. In: On the move to meaningful Internet systems: OTM 2008 workshops. Springer, Berlin, pp 489-498 Tibély G, Kovanen L, Karsai M, Kaski K, Kertész J, Saramäki J (2011) Communities and beyond: mesoscopic analysis of a large social network with complementary methods. Phys Rev E 83(5):056125 Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech 2008:P10008 Palla G, Barabási AL, Vicsek T (2007) Quantifying social group evolution. Nature 446(7136):664-667 Ahn YY, Bagrow JP, Lehmann S (2010) Link communities reveal multiscale complexity in networks.
Nature 466(7307):761-764 Lazer D, Pentland A, Adamic L, Aral S, Barabási A-L, Brewer D, Christakis N, Contractor N, Fowler J, Gutmann M, Jebara T, King G, Macy M, Roy D, Van Alstyne M (2009) Computational social science. Science 323(5915):721-723. http://www.sciencemag.org/content/323/5915/721.full.pdf. doi:10.1126/science.1167742 Eagle N, Pentland AS, Lazer D (2009) Inferring friendship network structure by using mobile phone data. Proc Natl Acad Sci USA 106(36):15274-15278 Wiese J, Min J-K, Hong JI, Zimmerman J (2015) 'You never call, you never write': call and SMS logs do not always indicate tie strength. In: Proceedings of the 2015 conference on computer supported cooperative work - CSCW'15. ACM, New York, pp 765-774 Blumenstock JE, Gillick D, Eagle N (2010) Who's calling? Demographics of mobile phone use in Rwanda. Transportation 32:2-5 Smoreda Z, Licoppe C (2000) Gender-specific use of the domestic telephone. Soc Psychol Q 63(3):238-252 Kovanen L, Kaski K, Kertész J, Saramäki J (2013) Temporal motifs reveal homophily, gender-specific patterns, and group talk in call sequences. Proc Natl Acad Sci USA 110(45):18070-18075 Frias-Martinez V, Frias-Martinez E, Oliver N (2010) A gender-centric analysis of calling behavior in a developing economy using call detail records. In: AAAI spring symposium: artificial intelligence for development Blumenstock JE, Eagle N (2012) Divided we call: disparities in access and use of mobile phones in Rwanda. Inf Technol Int Dev 8(2):1-16 Chawla NV, Hachen D, Lizardo O, Toroczkai Z, Strathman A, Wang C (2011) Weighted reciprocity in human communication networks. Technical report arXiv:1108.2822 Motahari S, Mengshoel OJ, Reuther P, Appala S, Zoia L, Shah J (2012) The impact of social affinity on phone calling patterns: categorizing social ties from call data records. In: The 6th SNA-KDD workshop '12 Barthélemy M (2011) Spatial networks. 
Phys Rep 499(1):1-101 Sterly H, Hennig B, Dongo K (2013) 'Calling abidjan' - improving population estimations with mobile communication data. In: Mobile phone data for development - analysis of mobile phone datasets for the development of Ivory Coast. Orange D4D challenge, pp 108-114 Krings G, Calabrese F, Ratti C, Blondel VD (2009) Urban gravity: a model for inter-city telecommunication flows. J Stat Mech Theory Exp 2009:07003 Krings G, Calabrese F, Ratti C, Blondel VD (2009) Scaling behaviors in the communication network between cities. In: International conference on computational science and engineering. IEEE Press, New York, pp 936-939 Onnela JP, Arbesman S, González MC, Barabási AL, Christakis NA (2011) Geographic constraints on social network groups. PLoS ONE 6(4):16939 Bucicovschi O, Douglass RW, Meyer DA, Ram M, Rideout D, Song D (2013) Analyzing social divisions using cell phone data. In: D4D book: mobile phone data for development. Analysis of mobile phone datasets for the development of Ivory Coast, pp 42-54 Noulas A, Scellato S, Lambiotte R, Pontil M, Mascolo C (2012) A tale of many cities: universal patterns in human urban mobility. PLoS ONE 7(5):37027 Liben-Nowell D, Novak J, Kumar R, Raghavan P, Tomkins A (2005) Geographic routing in social networks. Proc Natl Acad Sci USA 102(33):11623-11628 Carolan E, McLoone SC, McLoone SF, Farrell R (2012) Analysing Ireland's interurban communication network using call data records. In: IET Irish signals and systems conference (ISSC 2012). IET, Stevenage, pp 1-6 Schläpfer M, Bettencourt L, Grauwin S, Raschke M, Claxton R, Smoreda Z, West GB, Ratti C (2014) The scaling of human interactions with city size. J R Soc Interface 11:20130789 Jo H-H, Saramäki J, Dunbar RI, Kaski K (2014) Spatial patterns of close relationships across the lifespan. Sci Rep 4:6988 Herrera-Yagüe C, Schneider CM, Smoreda Z, Couronné T, Zufiria PJ, González MC (2014) The elliptic model for communication fluxes. 
J Stat Mech Theory Exp 2014(4):04022 Grady D, Brune R, Thiemann C, Theis F, Brockmann D (2012) Modularity maximization and tree clustering: novel ways to determine effective geographic borders. In: Handbook of optimization in complex networks. Springer, Berlin, pp 169-208 Blondel VD, Deville P, Morlot F, Smoreda Z, Van Dooren P, Ziemlicki C (2011) Voice on the border: do cellphones redraw the maps? ParisTech Review Blondel V, Krings G, Thomas I (2010) Regions and borders of mobile telephony in Belgium and in the Brussels metropolitan zone. Bruss Stud 42(4):1-12 Expert P, Evans TS, Blondel VD, Lambiotte R (2011) Uncovering space-independent communities in spatial networks. Proc Natl Acad Sci USA 108(19):7663-7668 Ratti C, Sobolevsky S, Calabrese F, Andris C, Reades J, Martino M, Claxton R, Strogatz SH (2010) Redrawing the map of Great Britain from a network of human interactions. PLoS ONE 5(12):14248 Blumenstock JE, Fratamico L (2013) Social and spatial ethnic segregation: a framework for analyzing segregation with large-scale spatial network data. In: Proceedings of the 4th annual symposium on computing for development. ACM DEV-4 '13. ACM, New York, article no 11 Mao H, Shuai X, Ahn YY, Bollen J (2013) Mobile communications reveal the regional economy in Côte d'Ivoire. In: Mobile phone data for development - analysis of mobile phone datasets for the development of Ivory Coast. Orange D4D challenge, pp 17-34 Smith-Clarke C, Mashhadi A, Capra L (2014) Poverty on the cheap: estimating poverty maps using aggregated mobile communication networks. In: Proceedings of the 32nd annual ACM conference on human factors in computing systems. ACM, New York, pp 511-520 Frias-Martinez V, Virseda J, Frias-Martinez E (2010) Socio-economic levels and human mobility. In: Qual meets quant workshop - QMQ Frias-Martinez V, Soguero-Ruiz C, Frias-Martinez E, Josephidou M (2013) Forecasting socioeconomic trends with cell phone records.
In: Proceedings of the 3rd ACM symposium on computing for development. ACM, New York, article no 15 Gutierrez T, Krings G, Blondel VD (2013) Evaluating socio-economic state of a country analyzing airtime credit and mobile phone datasets. ArXiv preprint arXiv:1309.4496 Holme P, Saramäki J (2012) Temporal networks. Phys Rep 519(3):97-125 Huang Z, Lin DKJ (2009) The time-series link prediction problem with applications in communication surveillance. INFORMS J Comput 21(2):286-303. doi:10.1287/ijoc.1080.0292 Yu K, Chu W, Yu S, Tresp V, Xu Z (2006) Stochastic relational models for discriminative link prediction. In: Advances in neural information processing systems, pp 1553-1560 Hidalgo CA, Rodriguez-Sickert C (2008) The dynamics of a mobile phone network. Physica A 387(12):3017-3024 Raeder T, Lizardo O, Hachen D, Chawla NV (2011) Predictors of short-term decay of cell phone contacts in a large scale communication network. Soc Netw 33(4):245-257 Kossinets G, Watts DJ (2006) Empirical analysis of an evolving social network. Science 311(5757):88-90 Miritello G (2013) Temporal patterns of communication in social networks. Springer, Berlin Karsai M, Perra N, Vespignani A (2014) Time varying networks and the weakness of strong ties. Sci Rep 4:4001 Miritello G, Rubén L, Cebrian M, Moro E (2013) Limited communication capacity unveils strategies for human interaction. Sci Rep 3:1950 Miritello G, Moro E, Lara R, Martínez-López R, Belchamber J, Roberts SGB, Dunbar RIM (2013) Time as a limited resource: communication strategy in mobile phone networks. Soc Netw 35(1):89-95 Saramäki J, Leicht EA, López E, Roberts SGB, Reed-Tsochas F, Dunbar RIM (2014) The persistence of social signatures in human communication. Proc Natl Acad Sci USA 111(3):942-947 Aynaud T, Fleury E, Guillaume J-L, Wang Q (2013) Communities in evolving networks: definitions, detection, and analysis techniques. In: Dynamics on and of complex networks, vol 2. 
We would like to thank Francesco Calabrese, Yves-Alexandre de Montjoye, Vanessa Frias-Martinez, Marta González, Jukka-Pekka Onnela, Jari Saramäki and Zbigniew Smoreda for their valuable comments and advice in finalizing this survey. AD is a research fellow with the Fonds de la Recherche Scientifique - FNRS.

Department of Applied Mathematics, Université catholique de Louvain, Avenue Georges Lemaitre, 4, Louvain-La-Neuve, 1348, Belgium: Vincent D Blondel, Adeline Decuyper & Gautier Krings. Real Impact Analytics, Place Flagey, 7, Brussels, 1050, Belgium: Gautier Krings. Correspondence to Vincent D Blondel.

VB conceived and supervised the study. AD and GK reviewed the papers and drafted the manuscript. All authors edited and approved the final version of the paper.

Blondel, V.D., Decuyper, A. & Krings, G. A survey of results on mobile phone datasets analysis. EPJ Data Sci. 4, 10 (2015). https://doi.org/10.1140/epjds/s13688-015-0046-0

Keywords: mobile phone datasets; big data analysis; temporal networks; geographical networks
Ukrains'kyi Matematychnyi Zhurnal (Ukr. Mat. Zh.)
Publisher: Institute of Mathematics NAS of Ukraine
Volume 56, № 5, 2004

Article (Russian)

Kolmogorov-type inequalities for mixed derivatives of functions of many variables
Babenko V. F., Korneichuk N. P., Pichugov S. A.
Ukr. Mat. Zh. - 2004. - 56, № 5. - pp. 579-594

Let $γ = (γ_1, ..., γ_d)$ be a vector with positive components and let $D^γ$ be the corresponding mixed derivative (of order $γ_j$ with respect to the $j$th variable). In the case where $d > 1$ and $0 < k < r$ are arbitrary, we prove that
$$\sup_{x \in L^{r\gamma}_{\infty}(T^d),\, D^{r\gamma}x\neq 0} \frac{\|D^{k\gamma}x\|_{L_{\infty}(T^d)}}{\|x\|^{1-k/r}_{L_{\infty}(T^d)}\|D^{r\gamma}x\|^{k/r}_{L_{\infty}(T^d)}} = \infty$$
and
$$\|D^{k\gamma}x\|_{L_{\infty}(T^d)} \leq K\|x\|^{1 - k/r}_{L_{\infty}(T^d)}\|D^{r\gamma}x\|_{L_{\infty}(T^d)}^{k/r} \left(1 + \ln^{+}\frac{\|D^{r\gamma}x\|_{L_{\infty}(T^d)}}{\|x\|_{L_{\infty}(T^d)}}\right)^{\beta}$$
for all $x \in L^{r\gamma}_{\infty}(T^d)$. Moreover, if $\bar \beta$ is the least possible value of the exponent $\beta$ in this inequality, then
$$(d - 1)\left(1 - \frac{k}{r}\right) \leqslant \bar \beta(d, \gamma, k, r) \leqslant d - 1.$$

Jackson-type inequalities and exact values of widths of classes of functions in the spaces $S^p, 1 ≤ p < ∞$
Vakarchuk S. B.
Ukr. Mat. Zh. - 2004. - 56, № 5. - pp. 595–605

In the spaces $S^p, 1 ≤ p < ∞$, introduced by Stepanets, we obtain exact Jackson-type inequalities and compute the exact values of widths of classes of functions determined by averaged moduli of continuity of order $m$.

On configurations of subspaces of a Hilbert space with fixed angles between them
Popova N. D., Vlasenko M. A.

We investigate the set of irreducible configurations of subspaces of a Hilbert space for which the angle between every two subspaces is fixed.
This is the problem of *-representations of certain algebras generated by idempotents and depending on parameters (on the set of angles). We separate the class of problems of finite and tame representation type. For these problems, we indicate conditions on angles under which the configurations of subspaces exist and describe all irreducible representations.

On random measures on spaces of trajectories and strong and weak solutions of stochastic equations
Dorogovtsev A. A.

We investigate stationary random measures on spaces of sequences or functions. A new definition of a strong solution of a stochastic equation is proposed. We prove that the existence of a weak solution in the ordinary sense is equivalent to the existence of a strong measure-valued solution.

On algebras of the Temperley-Lieb type associated with algebras generated by generators with given spectrum
Zavodovskii M. V.

We introduce and study algebras of the Temperley-Lieb type associated with algebras generated by linearly connected generators with given spectrum. We study their representations and the sets of parameters for which representations of these algebras exist.

On the solution of a one-dimensional stochastic differential equation with singular drift coefficient
Kulik A. M.

We determine generalized diffusion coefficients and describe the structure of local times for a process defined as a solution of a one-dimensional stochastic differential equation with singular drift coefficient.

Article (Ukrainian)

Properties of solutions of the Cauchy problem for essentially infinite-dimensional evolution equations
Mal'tsev A. Yu.

We investigate properties of solutions of the Cauchy problem for evolution equations with essentially infinite-dimensional elliptic operators.

Approximation of $\bar{\psi}$-integrals of continuous functions defined on the real axis by Fourier operators
Sokolenko I. V.
We obtain asymptotic formulas for the deviations of Fourier operators on the classes of continuous functions $C^{ψ}_{∞}$ and $\hat{C}^{\bar{\psi}} H_{\omega}$ in the uniform metric. We also establish asymptotic laws of decrease of functionals characterizing the problem of the simultaneous approximation of $\bar{\psi}$-integrals of continuous functions by Fourier operators in the uniform metric.

Brief Communications (English)

Entire solutions of the Euler-Poisson equations
Belyaev A. V.

All entire solutions of the Euler-Poisson equations are presented.

Brief Communications (Russian)

Second-order moment equations for a system of differential equations with random right-hand side
Dzhalladova I. A., Valeyev K. G.

We present a method for the derivation of second-order moment equations for solutions of a system of nonlinear equations that depends on a finite-valued semi-Markov or Markov process. For systems of linear differential equations with random coefficients, the case where the inhomogeneous part contains white noise is considered.

Rarefaction of moving diffusion particles
Gasanenko V. A., Roitman A. B.

We investigate a flow of particles moving along a tube together with gas. The dynamics of particles is determined by a stochastic differential equation with different initial states. The walls of the tube absorb particles. We prove that if the incoming flow of particles is determined by a random Poisson measure, then the number of remaining particles is characterized by the Poisson distribution. The parameter of this distribution is constructed by using a solution of the corresponding parabolic boundary-value problem.

Solution of a nonlinear singular integral equation with quadratic nonlinearity
Gun'ko O. V.
Using methods of the theory of boundary-value problems for analytic functions, we prove a theorem on the existence of solutions of the equation
$$u^2(t) + \left(\frac{1}{\pi}\int\limits_{-\infty}^{\infty} \frac{u(\tau)}{\tau - t}\,d\tau\right)^2 = A^2(t)$$
and determine the general form of a solution by using zeros of an entire function $A^2(z)$ of exponential type.

Brief Communications (Ukrainian)

Integral conditions for the invertibility of Markov chains on a half-line with general measure of irreducibility
Filonov Yu. P., Isakova T. I.

We present conditions for the invertibility of Markov chains with values from ℝ+ and general measure of irreducibility. The results are obtained by the classical method of test functions combined with the method of perturbation of partial potentials.

Continuous procedure of stochastic approximation in a semi-Markov medium
Chabanyuk Ya. M.

Using the Lyapunov function for an averaged system, we establish conditions for the convergence of the stochastic approximation procedure
$$du(t) = a(t)[C(u(t), x(t))\,dt + σ(u(t))\,dw(t)]$$
in a random semi-Markov medium described by an ergodic semi-Markov process $x(t)$.
Erdős–Nicolas number

In number theory, an Erdős–Nicolas number is a number that is not perfect, but that equals one of the partial sums of its divisors. That is, a number $n$ is an Erdős–Nicolas number when there exists another number $m$ such that $\sum _{d\mid n,\ d\leq m}d=n.$[1]

Erdős–Nicolas number
Named after: Paul Erdős, Jean-Louis Nicolas
Publication year: 1975
Author of publication: Erdős, P., Nicolas, J. L.
Subsequence of: Abundant numbers
First terms: 24, 2016, 8190
Largest known term: 3304572752464376776401640967110656
OEIS index: A194472 (Erdős–Nicolas numbers)

The first ten Erdős–Nicolas numbers are 24, 2016, 8190, 42336, 45864, 392448, 714240, 1571328, 61900800 and 91963648 (OEIS: A194472). They are named after Paul Erdős and Jean-Louis Nicolas, who wrote about them in 1975.[2]

See also
• Descartes number, another type of almost-perfect number

References
1. De Koninck, Jean-Marie (2009). Those Fascinating Numbers. p. 141. ISBN 978-0-8218-4807-4.
2. Erdős, P.; Nicolas, J.L. (1975), "Répartition des nombres superabondants" (PDF), Bull. Soc. Math.
France, 79 (103): 65–90, doi:10.24033/bsmf.1793, Zbl 0306.10025
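The definition above translates directly into a short test. As an illustrative sketch (the function name and search bound are choices made here, not part of the article), the following Python code walks the proper divisors of $n$ in ascending order and checks whether some partial sum hits $n$ exactly, excluding perfect numbers:

```python
def is_erdos_nicolas(n):
    """True if n is an Erdős-Nicolas number: n is not perfect, yet some
    partial sum of its divisors, taken in ascending order, equals n."""
    # Collect divisors of n in O(sqrt(n)): small ones up to sqrt(n),
    # plus their complements n // d.
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    divisors = small + large[::-1]   # ascending; the last entry is n itself
    proper = divisors[:-1]           # drop n
    if sum(proper) == n:             # perfect numbers are excluded by definition
        return False
    total = 0
    for d in proper:                 # partial sums are strictly increasing,
        total += d                   # so stop at the first sum >= n
        if total == n:
            return True
        if total > n:
            return False
    return False

print([n for n in range(2, 50000) if is_erdos_nicolas(n)])
# [24, 2016, 8190, 42336, 45864], the first terms listed above
```

For 24, the check works out by hand: the proper divisors are 1, 2, 3, 4, 6, 8, 12, and the partial sum 1 + 2 + 3 + 4 + 6 + 8 = 24.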
Slutsky's theorem

In probability theory, Slutsky's theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables.[1] The theorem was named after Eugen Slutsky.[2] Slutsky's theorem is also attributed to Harald Cramér.[3]

Statement

Let $X_{n},Y_{n}$ be sequences of scalar/vector/matrix random elements. If $X_{n}$ converges in distribution to a random element $X$ and $Y_{n}$ converges in probability to a constant $c$, then
• $X_{n}+Y_{n}\ {\xrightarrow {d}}\ X+c;$
• $X_{n}Y_{n}\ \xrightarrow {d} \ Xc;$
• $X_{n}/Y_{n}\ {\xrightarrow {d}}\ X/c,$ provided that $c$ is invertible,
where ${\xrightarrow {d}}$ denotes convergence in distribution.

Notes:
1. The requirement that $Y_n$ converges to a constant is important: if it were to converge to a non-degenerate random variable, the theorem would no longer be valid. For example, let $X_{n}\sim {\rm {Uniform}}(0,1)$ and $Y_{n}=-X_{n}$. The sum $X_{n}+Y_{n}=0$ for all values of $n$. Moreover, $Y_{n}\,\xrightarrow {d} \,{\rm {Uniform}}(-1,0)$, but $X_{n}+Y_{n}$ does not converge in distribution to $X+Y$, where $X\sim {\rm {Uniform}}(0,1)$, $Y\sim {\rm {Uniform}}(-1,0)$, and $X$ and $Y$ are independent.[4]
2. The theorem remains valid if we replace all convergences in distribution with convergences in probability.

Proof

This theorem follows from the fact that if $X_n$ converges in distribution to $X$ and $Y_n$ converges in probability to a constant $c$, then the joint vector $(X_n, Y_n)$ converges in distribution to $(X, c)$. Next we apply the continuous mapping theorem, recognizing that the functions $g(x,y)=x+y$, $g(x,y)=xy$, and $g(x,y)=xy^{-1}$ are continuous (for the last function to be continuous, $y$ has to be invertible).

See also
• Convergence of random variables

References
1. Goldberger, Arthur S. (1964). Econometric Theory. New York: Wiley. pp. 117–120.
2. Slutsky, E. (1925). "Über stochastische Asymptoten und Grenzwerte". Metron (in German). 5 (3): 3–89.
JFM 51.0380.03.
3. Slutsky's theorem is also called Cramér's theorem according to Remark 11.1 (page 249) of Gut, Allan (2005). Probability: a graduate course. Springer-Verlag. ISBN 0-387-22833-0.
4. See Zeng, Donglin (Fall 2018). "Large Sample Theory of Random Variables (lecture slides)" (PDF). Advanced Probability and Statistical Inference I (BIOS 760). University of North Carolina at Chapel Hill. Slide 59.

Further reading
• Casella, George; Berger, Roger L. (2001). Statistical Inference. Pacific Grove: Duxbury. pp. 240–245. ISBN 0-534-24312-6.
• Grimmett, G.; Stirzaker, D. (2001). Probability and Random Processes (3rd ed.). Oxford.
• Hayashi, Fumio (2000). Econometrics. Princeton University Press. pp. 92–93. ISBN 0-691-01018-8.
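The product case of the theorem can be illustrated empirically. The following Monte Carlo sketch (an addition here, not part of the article; the choice of Exp(1) and Uniform(0,1) samples is illustrative) takes $X_n$ as the CLT-standardized mean of $n$ Exp(1) draws, which converges in distribution to $N(0,1)$, and $Y_n$ as the mean of $n$ Uniform(0,1) draws, which converges in probability to $1/2$; Slutsky's theorem then gives $X_n Y_n \xrightarrow{d} N(0, 1/4)$:

```python
import math
import random

random.seed(42)

def simulate(n=500, reps=4000):
    """Draw reps realizations of X_n * Y_n, where
    X_n = sqrt(n) * (mean of n Exp(1) draws - 1)  -> N(0,1) in distribution,
    Y_n = mean of n Uniform(0,1) draws            -> 1/2 in probability."""
    out = []
    for _ in range(reps):
        es = [random.expovariate(1.0) for _ in range(n)]
        us = [random.random() for _ in range(n)]
        x = math.sqrt(n) * (sum(es) / n - 1.0)
        y = sum(us) / n
        out.append(x * y)
    return out

z = simulate()
m = sum(z) / len(z)
s = math.sqrt(sum((v - m) ** 2 for v in z) / len(z))
print(round(m, 2), round(s, 2))  # close to 0 and 0.5, the N(0, 1/4) limit
```

The empirical mean and standard deviation of the simulated products sit near 0 and 0.5, consistent with the limiting $N(0, 1/4)$ distribution.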
Claire Mathieu

Claire Mathieu (formerly Kenyon, born 9 March 1965 in Caen[1]) is a French computer scientist and mathematician, known for her research on approximation algorithms, online algorithms, and auction theory. She works as a director of research at the Centre national de la recherche scientifique.[2]

Mathieu earned her Ph.D. in 1988 from the University of Paris-Sud, under the supervision of Claude Puech.[3] She worked at CNRS and ENS Lyon from 1991 to 1997, at Paris-Sud from 1997 to 2002, at the École Polytechnique from 2002 to 2004, and at Brown University from 2004 to 2011 before returning to CNRS in 2012.[2][4] She was an invited speaker at the 2014 International Colloquium on Automata, Languages and Programming[5] and at the 2015 Symposium on Discrete Algorithms.[6] She won the CNRS Silver Medal in 2019.[7] In 2020, she became a Chevalier of the Légion d'honneur.

References
1. Birth year from ISNI authority control file, retrieved 2018-11-29.
2. Page personnelle de Claire Mathieu, École Normale Supérieure, retrieved 2016-03-28.
3. Claire Mathieu at the Mathematics Genealogy Project
4. Mathieu, Claire (2010), Curriculum vitae (PDF), Brown University.
5. Claire Mathieu, Invited Talks, International Colloquium on Automata, Languages and Programming, 2014.
6. Invited Presentations, ACM-SIAM Symposium on Discrete Algorithms 2015, Society for Industrial and Applied Mathematics, 2015.
7. Talents, CNRS, retrieved 2022-03-09
Internet Movie Database

Internet Movie Database (IMDb)
Website: www.imdb.com
Type: Commercial online database for movies, television, and video games
Registration: Optional; members may participate in discussions, comments, ratings, and voting
Founder: Col Needham (CEO)
Launched: October 17, 1990
Alexa rank: 49 (July 2015)[1]

The Internet Movie Database (abbreviated IMDb) is an online database of information related to films, television programs, and video games, including cast, production crew, fictional characters, biographies, plot summaries, trivia and reviews. Actors and crew can post their own résumé and upload photos of themselves for a yearly fee. U.S. users can also view over 6,000 movies and television shows from CBS, Sony, and various independent film makers. Launched in 1990 by professional computer programmer Col Needham, the company was incorporated in the UK as Internet Movie Database Ltd in 1996, with revenue generated through advertising, licensing, and partnerships. In 1998, it became a subsidiary of Amazon.com, which was then able to use it as an advertising resource for selling DVDs and videotapes. As of September 2015, IMDb had approximately 3.4 million titles (including episodes) and 6.7 million personalities in its database,[2] as well as 60 million registered users, and is an Alexa Top 50 site. The site enables registered users to submit new material and request edits to existing entries.
Although all data is checked before going live, the system has been open to abuse, and occasional errors are acknowledged. Users are also invited to rate any film on a scale of 1 to 10, and the totals are converted into a weighted mean rating that is displayed beside each title, with online filters employed to deter ballot-stuffing. The site also features message boards, which stimulate regular debates among authenticated users.

History

Before website

IMDb originated with a Usenet posting by British film fan and professional computer programmer Col Needham entitled "Those Eyes", about actresses with beautiful eyes. Others with similar interests soon responded with additions or different lists of their own. Needham subsequently started a (male) "Actors List", while Dave Knight began a "Directors List", and Andy Krieg took over "THE LIST" from Hank Driskill, which would later be renamed the "Actress List". Both lists had been restricted to people who were alive and working, but soon retired people were added, so Needham started what was then (but did not remain) a separate "Dead Actors/Actresses List". The goal of the participants now was to make the lists as inclusive as possible. By late 1990, the lists included almost 10,000 movies and television series correlated with actors and actresses appearing therein.
On October 17, 1990, Needham developed and posted a collection of Unix shell scripts which could be used to search the four lists, and thus the database that would become the IMDb was born.[3] At the time, it was known as the "rec.arts.movies movie database", but by 1993 it had been moved out of the Usenet group as an independent website underwritten and controlled by Needham and personal followers.

Other website users were invited to contribute data which they may have collected and verified, on a volunteer basis, which greatly increased the amount and types of data to be stored. Entire new sections were added. As the site grew, full production crews, uncredited performers and other demographic data were added. Needham's group allowed some advertising to support ongoing operations of the site, including the hiring of full-time paid data managers. All the primary staff came (and still come) from the burgeoning computer industry and/or training schools and did not have extensive expertise in visual media.

In 1998, unable to secure sufficient funding from limited advertising and contributions, and unable to raise support from the visual media industries or academia, Needham sold the IMDb site to Amazon.com, on condition that its operation would remain in the hands of Needham and his small cadre of managers, who soon were able to move into full-time paid staff positions.

The database had been expanded to include additional categories of filmmakers and other demographic material, as well as trivia, biographies, and plot summaries. The movie ratings had been properly integrated with the list data, and a centralized email interface for querying the database had been created by Alan Jay. Later in the year it moved onto the World Wide Web (a network in its infancy at that time) under the name of the Cardiff Internet Movie Database.[4] The database resided on the servers of the computer science department of Cardiff University in Wales.
Rob Hartill was the original web interface author. In 1994 the email interface was revised to accept the submission of all information, meaning that people no longer had to email the specific list maintainer with their updates. However, the structure remained that information received on a single film was divided among multiple section managers, the sections being defined and determined by categories of film personnel and the individual filmographies contained therein. Over the next few years, the database was run on a network of mirrors across the world with donated bandwidth.

The website is Perl-based.[5] As of May 2011, the site had been filtered in China for more than one year, although many users address it through proxy servers or by VPN.[6] On October 17, 2010, IMDb launched original video (www.imdb.com/20) in celebration of its 20th anniversary.[7]

As an independent company

In 1996 IMDb was incorporated in the United Kingdom, becoming the Internet Movie Database Ltd. Founder Col Needham became the primary owner as well as the figurehead. General revenue for site operations was generated through advertising, licensing and partnerships.

As Amazon.com subsidiary

In 1998, Jeff Bezos, founder, owner and CEO of Amazon.com, struck a deal with Col Needham and other principal shareholders to buy IMDb outright and attach it to Amazon as a subsidiary, private company.[8] This gave IMDb the ability to pay the shareholders salaries for their work, while Amazon.com would be able to use the IMDb as an advertising resource for selling DVDs and videotapes. IMDb continued to expand its functionality. On January 15, 2002, it added a subscription service known as IMDbPro, aimed at entertainment professionals. IMDbPro was announced and launched at the 2002 Sundance Film Festival. It provides a variety of services including film production and box office details, as well as a company directory.
As an additional incentive for users, as of 2003, users identified as one of "the top 100 contributors" of hard data received complimentary access to IMDbPro for the following calendar year; for 2006 this was increased to the top 150 contributors, and for 2010 to the top 250.[9] In 2008 IMDb launched its first official foreign-language version with the German IMDb.de. Also in 2008, IMDb acquired two other companies, Withoutabox and Box Office Mojo.

Television episodes

On January 26, 2006, "Full Episode Support" came online, allowing the database to support separate cast and crew listings for each episode of every television series. This was described by Col Needham as "the largest change we've ever made to our data model", and increased the number of titles in the database from 485,000 to nearly 755,000.

Characters' filmography

On October 2, 2007, the characters' filmography was added. Character entries are created from character listings in the main filmography database, and as such do not need any additional verification by IMDb staff; they have already been verified when they are added to the main filmography.

Instant viewing

On September 15, 2008, a feature was added that enables instant viewing of over 6,000 movies and television shows from CBS, Sony and a number of independent film makers, with direct links from their profiles.[10] Due to licensing restrictions, this feature is only available to viewers in the United States.[11]

Data provided by subjects

In 2006, IMDb introduced its "Résumé Subscription Service", where actors and crew can post their own résumé and upload photos of themselves[12] for a yearly fee.[13] The base annual charge for including a photo with an account was $39.95 until 2010, when it was increased to $54.95.
IMDb résumé pages are kept on a sub-page of the regular entry about that person, with a regular entry automatically created for each résumé subscriber who does not already have one.[14] As of 2012, Resume Services is included as part of an IMDbPro subscription, and is no longer offered as a separate subscription service.

Copyright, vandalism and error issues

All volunteers who contribute content to the database technically retain copyright on their contributions, but the compilation of the content becomes the exclusive property of IMDb, with the full right to copy, modify, and sublicense it; submissions are verified before posting.[15] Credit is not given on specific title or filmography pages to the contributor(s) who have provided information. Conversely, a credited text entry, such as a plot summary, may be "corrected" for content, grammar, sentence structure, perceived omission or error, by other contributors without having to add their names as co-authors. Due to the process of having the submitted data or text reviewed by a section manager, IMDb is different from database projects like Wikipedia, Discogs or OpenStreetMap in that contributors cannot add, delete, or modify the data or text on impulse, and the manipulation of data is controlled by IMDb technology and salaried staff.[16]

IMDb has been subject to deliberate additions of false information; in 2012 a spokesperson said: "We make it easy for users and professionals to update much of our content, which is why we have an 'edit page.' The data that is submitted goes through a series of consistency checks before it goes live. Given the sheer volume of the information, occasional mistakes are inevitable, and, when reported, they are promptly fixed. We always welcome corrections."[17] The Java Movie Database (JMDB)[18] is reportedly creating an IMDb_Error.log file that lists all the errors found while processing the IMDb plain text files.
A wiki alternative to IMDb is the Open Media Database,[19] whose content is also contributed by users but licensed under CC-BY and the GFDL. Since 2007, IMDb has been experimenting with wiki-programmed sections for complete film synopses, parental guides, and FAQs about titles as determined by (and answered by) individual contributors.

Data format and access

IMDb does not provide an API for automated queries. However, most of the data can be downloaded as compressed plain text files, and the information can be extracted using the command-line interface tools provided.[20] Besides that, a Java-based graphical user interface (GUI) application is available which can process the compressed plain text files and search and display the information.[18] This GUI application supports different languages, but the movie-related data is in English, as made available by IMDb. A Python package called IMDbPY can also be used to process the compressed plain text files into a number of different SQL databases, enabling easier access to the entire dataset for searching or data mining.[21]

Film titles

The IMDb has sites in English as well as versions translated completely or in part into other languages (Danish, Finnish, French, German, Hungarian, Italian, Polish, Portuguese and Romanian). The non-English language sites display film titles in the specified language. While originally the IMDb's English-language sites displayed titles according to their original country-of-origin language, in 2010 the IMDb began allowing individual users in the UK and USA to choose primary title display by either the original-language titles, or the US or UK release title (normally, in English).

Ancillary features

User ratings of films

As one adjunct to data, the IMDb offers a rating scale that allows users to rate films on a scale of one to ten.
The rating system has been claimed to be flawed for several reasons.[22][23] IMDb indicates that submitted ratings are filtered and weighted in various ways in order to produce a weighted mean that is displayed for each film, series, and so on. It states that filters are used to avoid ballot stuffing; the method is not described in detail to avoid attempts to circumvent it. In fact, it sometimes produces an extreme difference between the weighted average and the arithmetic mean.

Film rankings (IMDb Top 250)

The IMDb Top 250 list is a listing of the top rated 250 films of all time, based on ratings by the registered users of the website using the methods described. Currently, The Shawshank Redemption is #1 on the list.[24] The "top 250" rating is based on only the ratings of "regular voters". The exact number of votes a registered user would have to make to be considered a user who votes regularly has been kept secret. IMDb has stated that to maintain the effectiveness of the top 250 list they "deliberately do not disclose the criteria used for a person to be counted as a regular voter".[25] In addition to other weightings, the top 250 films are also based on a weighted rating formula referred to in actuarial science as a credibility formula.[26] This label arises because a statistic is taken to be more credible the greater the number of individual pieces of information; in this case from eligible users who submit ratings. Though the current formula is not disclosed, IMDb originally used the following formula to calculate their weighted rating:[27][28]

$W = \frac{Rv + Cm}{v + m}$

where:
$W$ = weighted rating
$R$ = average rating for the movie, as a number from 0 to 10 (mean)
$v$ = number of votes for the movie
$m$ = minimum votes required to be listed in the Top 250 (currently 25,000)
$C$ = the mean vote across the whole report (currently 7.0)

The $W$ in this formula is equivalent to a Bayesian posterior mean (see Bayesian statistics).
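The retired formula quoted above is simple enough to evaluate directly. Below is a minimal sketch (the function name is mine, not IMDb's) using the m = 25,000 and C = 7.0 values given above; it reproduces the behavior described in the site's FAQ, where a highly rated title with few votes can rank below a slightly lower-rated title with many votes.

```python
def weighted_rating(R, v, m=25_000, C=7.0):
    """IMDb's originally published Top 250 weighting: a Bayesian-style
    shrinkage of a title's mean rating R (over v votes) toward the
    report-wide mean C, with m acting as a prior vote count."""
    return (R * v + C * m) / (v + m)

# A 9.4-rated title with only 1,000 votes is pulled close to the prior
# mean, while a 9.0-rated title with 2,000,000 votes barely moves.
low_votes = weighted_rating(R=9.4, v=1_000)       # about 7.09
high_votes = weighted_rating(R=9.0, v=2_000_000)  # about 8.98
```

This is exactly the shrinkage behavior of a Bayesian posterior mean: with v = 0 the result is just the prior mean C, and as v grows the weighted rating approaches the raw average R.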
The IMDb also has a Bottom 100 feature which is assembled through a similar process, although only 1,500 votes must be received to qualify for the list.[29] The top 250 list comprises a wide range of feature films, including major releases, cult films, independent films, critically acclaimed films, silent films and non-English language films. Short films and TV episodes are not included.

Fan activity

One of the most used features of the Internet Movie Database is the message boards that coincide with every title (excepting, as of 2013, TV episodes[30]) and name entry, along with over 140 main boards. This section is one of the more recent features of IMDb, having its beginnings in 2001. In order to post on the message boards a user needs to "authenticate" their account via cell phone, credit card, or by having been a recent customer of the parent company Amazon.com. Message boards have expanded in recent years. The Soapbox, started in 1999, is a general message board meant for debates on any subject. The Politics board, started in 2007, is a message board to discuss politics, news events and current affairs as well as history and economics. Both message boards have become the most popular message boards on IMDb, more popular on a long-term basis than any individual movie message board.

In 2011, in the case of Hoang v. Amazon.com, IMDb was sued by an anonymous actress for more than US$1 million due to IMDb's revealing her age (40, at the time).[31] The actress claimed that revealing her age could cause her to lose acting opportunities.[32] Judge Marsha J. Pechman, a U.S. district judge in Seattle, dismissed the lawsuit, saying the actress had no grounds to proceed with an anonymous complaint.
The actress re-filed, and so revealed that the complainant is Huong Hoang of Texas, who uses the stage name Junie Hoang.[33] In 2013, Pechman dismissed all causes of action except for a breach of contract claim against IMDb; a jury then sided with IMDb on that claim.[34] As of February 2015, the case against IMDb remains under appeal.[35][36]

Also in 2011, in the case of United Video Properties Inc., et al. v. Amazon.Com Inc. et al.,[37] IMDb and Amazon were sued by Rovi Corporation and others for patent infringement over their various program listing offerings.[38] The patent claims were ultimately construed in a way favorable to IMDb, and Rovi/United Video Properties lost the case, though as of November 2014 it is on appeal.[39]

See also

AllMusic – a similar database, but for music
AllRovi – a commercial database launched by the Rovi Corporation that compiles information from the former services Allmovie and Allmusic
Animator.ru
Big Cartoon DataBase
DBCult Film Institute
Filmweb
FindAnyFilm.com
Flickchart
Internet Adult Film Database
Internet Movie Firearms Database (IMFDb)
Internet Book Database (IBookDb)
Internet Broadway Database (IBDb)
Internet Off-Broadway Database (IOBDb)
Internet Speculative Fiction Database (ISFDb)
Internet Theatre Database (ITDb)
List of films considered the best
List of films considered the worst
TheTVDB

References

^ "Imdb.com Site Info".
^ "Stats". IMDb. Retrieved September 10, 2015.
^ Chmielewski, Dawn C. (January 19, 2013), "Col Needham created IMDb", Los Angeles Times.
^ "Historical Internet Movie Database Site". Cardiff School of Computer Science & Informatics. Retrieved March 21, 2013.
^ "What software/hardware are you using to run the site?" imdb.com.
^ Chacksfield, Marc (January 14, 2010). "China blocks number-one movie site IMDb". Future US, Inc.
^ Ehlrich, Brenna (September 30, 2010). "IMDb Turns 20, Launches Original Video to Celebrate". mashable.com.
^ "INTERNET BOOKSELLER AMAZON.COM ANNOUNCES ACQUISITION OF UNITED KINGDOM COMPANY THE INTERNET MOVIE DATABASE LTD.". IMDb via PR Newswire Europe. Retrieved January 15, 2007.
^ Needham, Col (January 1, 2011). "IMDb announcement: Top 250 Contributors for 2010". IMDb Contributors Top Contributors. Retrieved August 25, 2011.
^ Hoffman, Harrison (September 15, 2008). "IMDb now serves full-length videos".
^ Modine, Austin (September 16, 2008). "IMDb adds full-length streaming movies (Show your US ID card at the door)". The Register. Retrieved September 17, 2008.
^ "Lycos Europe and IMDb sign sales agreement for 9 European markets". Lycos Europe press release, July 10, 2006.
^ "IMDb Resume FAQ: Can I subscribe only for one month or one year?". Retrieved January 22, 2008.
^ "IMDb Resume FAQ: Is there any difference between a regular IMDb name page and an IMDb name page created via IMDb Resume?". Retrieved January 22, 2008.
^ "IMDb Copyright and Conditions of Use". imdb.com.
^ "The Plain Text Data Files". IMDb – Alternate Interfaces.
^ "Which A-List Star Is Hacking IMDb Pages?". Hollywoodreporter.com. November 14, 2012. Retrieved February 25, 2013.
^ a b "Java Movie Database (JMDB)". Jmdb.de. Retrieved October 27, 2010.
^ omdb.org
^ "Alternate Interfaces". IMDb. Retrieved January 15, 2007.
^ "IMDbPY". IMDbPY. Retrieved February 14, 2011.
^ Wong, David. "IMDB". Cracked.com. Retrieved February 25, 2013.
^ "Why IMDb's Top 250 Matters...And Why It Doesn't". Screenrant.com. April 13, 2010. Retrieved February 25, 2013.
^ "Top 250 movies as voted by our users". IMDb. Retrieved June 10, 2015.
^ "The user votes average on film X is 9.4, so it should appear in your top 250 films listing, yet it doesn't. Why?"
^ Norberg, Ragnar (2006). "Credibility Theory". Encyclopedia of Actuarial Science (PDF). Mirror.
^ "IMDb's statement on their voting calculation". imdb.com. Retrieved February 5, 2015.
^ "IMDb Vote FAQ". imdb.com. Retrieved February 6, 2015.
^ "Bottom 100". IMDb. Retrieved March 1, 2007.
^ Each TV episode uses the same message board for the whole series.
^ Bahr, Lindsey (October 18, 2011). "Lawsuit against IMDb revealing private information". Insidemovies.ew.com. Retrieved April 25, 2013.
^ "Acting unions criticise IMDb in age row". BBC. October 29, 2011. Retrieved October 29, 2011.
^ "Actress Sued Amazon For Revealing Age 40 Identified As Huong Junie Hoang". News.sky.com. January 7, 2012. Retrieved April 21, 2012.
^ "Actress age claim against IMDb rejected". BBC News. Retrieved April 12, 2013.
^ "Calendar for Seattle, Washington". United States Court of Appeals for the Ninth Circuit. Retrieved November 24, 2014.
^ Gardner, Eriq (February 6, 2015). "Appeals Court Hears the Scary Things That Can Happen to Actors Who Lie to IMDb". Hollywood Reporter. Retrieved February 10, 2015.
^ "Case Docket: United Video Properties Inc., et al v. Amazon.Com Inc. et al".
^ Masnick, Mike (January 12, 2011). "Rovi Sues Amazon for Not Licensing its Electronic TV Guide Patent".
^ Mullin, Joe (November 4, 2013). "Netflix roasts Rovi's 'Interactive TV guide' patents at ITC".
Truncated order-8 hexagonal tiling

In geometry, the truncated order-8 hexagonal tiling is a semiregular tiling of the hyperbolic plane. It has Schläfli symbol t{6,8}.

Truncated order-8 hexagonal tiling (Poincaré disk model of the hyperbolic plane)
Type: Hyperbolic uniform tiling
Vertex configuration: 8.12.12
Schläfli symbol: t{6,8}
Wythoff symbol: 2 8 | 6
Symmetry group: [8,6], (*862)
Dual: Order-6 octakis octagonal tiling
Properties: Vertex-transitive

Uniform colorings

This tiling can also be constructed from *664 symmetry, as t{(6,6,4)}.

Related polyhedra and tilings

From a Wythoff construction there are fourteen hyperbolic uniform tilings that can be based on the regular order-6 octagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 7 forms with full [8,6] symmetry, and 7 with subsymmetry.

Uniform octagonal/hexagonal tilings with symmetry [8,6], (*862), and their duals:
• {8,6} (dual V86)
• t{8,6} (dual V6.16.16)
• r{8,6} (dual V(6.8)2)
• 2t{8,6} = t{6,8} (dual V8.12.12)
• 2r{8,6} = {6,8} (dual V68)
• rr{8,6} (dual V4.6.4.8)
• tr{8,6} (dual V4.12.16)

Alternations, with their duals:
• [1+,8,6] (*466): h{8,6} (dual V(4.6)6)
• [8+,6] (8*3): s{8,6} (dual V3.3.8.3.8.3)
• [8,1+,6] (*4232): hr{8,6} (dual V(3.4.4.4)2)
• [8,6+] (6*4): s{6,8} (dual V3.4.3.4.3.6)
• [8,6,1+] (*883): h{6,8} (dual V(3.8)8)
• [(8,6,2+)] (2*43): hrr{8,6} (dual V3.45)
• [8,6]+ (862): sr{8,6} (dual V3.3.6.3.8)

Symmetry

The dual of the tiling represents the fundamental domains of (*664) orbifold symmetry. From [(6,6,4)] (*664) symmetry, there are 15 small index subgroups (11 unique) by mirror removal and alternation operators. Mirrors can be removed if their branch orders are all even, which cuts neighboring branch orders in half. Removing two mirrors leaves a half-order gyration point where the removed mirrors met. In these images fundamental domains are alternately colored black and white, and mirrors exist on the boundaries between colors. The symmetry can be doubled to 862 symmetry by adding a bisecting mirror across the fundamental domains.
The subgroup index-8 group, [(1+,6,1+,6,1+,4)] (332332), is the commutator subgroup of [(6,6,4)]. A large subgroup is constructed [(6,6,4*)], index 8, as (4*33) with gyration points removed, becomes (*38), and another large subgroup is constructed [(6,6*,4)], index 12, as (6*32) with gyration points removed, becomes (*(32)6).

Small index subgroups of [(6,6,4)] (*664):
Subgroup index: 1, 2, 4
Coxeter: [(6,6,4)], [(1+,6,6,4)], [(6,6,1+,4)], [(6,1+,6,4)], [(1+,6,6,1+,4)], [(6+,6+,4)]
Orbifold: *664, *6362, *4343, 2*3333, 332×
Coxeter: [(6,6+,4)], [(6+,6,4)], [(6,6,4+)], [(6,1+,6,1+,4)], [(1+,6,1+,6,4)]
Orbifold: 6*32, 4*33, 3*3232

Direct subgroups
Subgroup index: 2, 4, 8
Coxeter: [(6,6,4)]+, [(1+,6,6+,4)], [(6+,6,1+,4)], [(6,1+,6,4+)], [(6+,6+,4+)] = [(1+,6,1+,6,1+,4)]
Orbifold: 664, 6362, 4343, 332332

See also

• Tilings of regular polygons
• List of uniform planar tilings

External links

• Weisstein, Eric W. "Hyperbolic tiling". MathWorld.
• Weisstein, Eric W. "Poincaré hyperbolic disk". MathWorld.
• Hyperbolic and Spherical Tiling Gallery
• KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
• Hyperbolic Planar Tessellations, Don Hatch
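As a quick sanity check (my own, not from the article) of why the vertex configuration 8.12.12 above is hyperbolic: the interior angles of a regular Euclidean octagon and two regular dodecagons sum to more than 360°, so the three polygons cannot fit flat around a vertex, and the tiling only closes up in the hyperbolic plane, where regular polygons can have smaller angles.

```python
def interior_angle(p):
    """Interior angle, in degrees, of a regular Euclidean p-gon."""
    return 180.0 * (p - 2) / p

# Vertex configuration 8.12.12: one octagon and two dodecagons meet
# at each vertex of the truncated order-8 hexagonal tiling.
angle_sum = interior_angle(8) + 2 * interior_angle(12)  # 135 + 150 + 150 = 435

# 435 > 360: a Euclidean tiling would need the angles at each vertex
# to sum to exactly 360, so this configuration must be hyperbolic.
```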
Recent questions tagged binomial-distribution

MadeEasy Test Series: Probability
How do we get the idea whether to use the Binomial distribution or the Hypergeometric distribution? I know that if the probability is not changing (i.e., with replacement) then we use Binomial, otherwise Hypergeometric. But the question does not indicate ... So is there a default approach, i.e., do we use Binomial if nothing is mentioned about replacement?
asked Jan 9, 2019 in Mathematical Logic by junaid ahmad Loyal (8.6k points) | 49 views
made-easy-test-series binomial-distribution

Binomial distribution
Is there any relation between MEAN, VARIANCE and MODE for a binomial distribution? Let mean = 8, variance = 6 for some binomial distribution. Then np = 8 and npq = 6, so q = $3/4$, p = $1/4$. Now is there any relation to find the value of the MODE?
asked Dec 18, 2018 in Mathematical Logic by shreyansh jain Active (2.2k points) | 90 views

Hk Dass
In a binomial distribution the sum and the product of mean and variance are $\Large \frac{25}{3}$ and $\Large \frac{50}{3}$ respectively. The distribution is _______.
Note: I've not included the options to avoid KBC in comments.
asked Aug 31, 2018 in Probability by Mk Utkarsh Boss (36.4k points) | 112 views

We have applied the Bernoulli equation to get the answer. But why isn't the answer C(90,5)÷C(100,5)?
asked Aug 9, 2018 in Probability by Arjun045 (19 points) | 48 views

$a_n = 4^n + 6^n$
If $a_n = 4^n + 6^n$, find the value of $a_{40} \bmod 25$.
asked May 19, 2017 in Set Theory & Algebra by dd Veteran (57.2k points) | 138 views

Hashing+Probability
asked Oct 8, 2016 in DS by Rahul Jain25 Boss (11.1k points) | 261 views
uniform-hashing

TIFR2011-A-3
The probability of three consecutive heads in four tosses of a fair coin is.
$\left(\dfrac{1}{4}\right)$
$\left(\dfrac{1}{8}\right)$
$\left(\dfrac{1}{16}\right)$
$\left(\dfrac{3}{16}\right)$
$\text{None of the above.}$
asked Oct 17, 2015 in Probability by makhdoom ghaya Boss (30.7k points) | 561 views
tifr2011

+13 votes
TIFR2010-B-38
Suppose three coins are lying on a table, two of them with heads facing up and one with tails facing up. One coin is chosen at random and flipped. What is the probability that after the flip the majority of the coins (i.e., at least two of them) will have heads facing up? ... $\left(\frac{1}{4}\right)$ $\left(\frac{1}{4}+\frac{1}{8}\right)$ $\left(\frac{2}{3}\right)$
asked Oct 11, 2015 in Probability by makhdoom ghaya Boss (30.7k points) | 1k views

Given 10 tosses of a coin with probability of head = $0.4$ = ($1$ - the probability of tail), the probability of at least one head is?
$(.4)^{10}$
$1 - (.4)^{10}$
$1 - (.6)^{10}$
$(.6)^{10}$
$10(.4)(.6)^{9}$
asked Oct 2, 2015 in Probability by makhdoom ghaya Boss (30.7k points) | 493 views

GATE2005-IT-32
An unbiased coin is tossed repeatedly until the outcome of two successive tosses is the same. Assuming that the trials are independent, the expected number of tosses is
$3$
$4$
$5$
$6$
asked Nov 3, 2014 in Probability by Ishrat Jahan Boss (16.3k points) | 5.9k views
gate2005-it

When a coin is tossed, the probability of getting a Head is $p, 0 < p < 1$. Let $N$ be the random variable denoting the number of tosses till the first Head appears, including the toss where the Head appears. Assuming that successive tosses are independent, the expected value of $N$ is
$\dfrac{1}{p}$
$\dfrac{1}{(1 - p)}$
$\dfrac{1}{p^{2}}$
$\dfrac{1}{(1 - p^{2})}$
asked Oct 31, 2014 in Probability by Ishrat Jahan Boss (16.3k points) | 2.5k views

GATE2005-52
A random bit string of length n is constructed by tossing a fair coin n times and setting a bit to 0 or 1 depending on outcomes head and tail, respectively.
The probability that two such randomly generated strings are not identical is: $\frac{1}{2^n}$ $1 - \frac{1}{n}$ $\frac{1}{n!}$ $1 - \frac{1}{2^n}$ asked Sep 21, 2014 in Probability by gatecse Boss (17.5k points) | 2k views gate2005 For each element in a set of size $2n$, an unbiased coin is tossed. The $2n$ coin tosses are independent. An element is chosen if the corresponding coin toss was a head. The probability that exactly $n$ elements are chosen is $\frac{^{2n}\mathrm{C}_n}{4^n}$ $\frac{^{2n}\mathrm{C}_n}{2^n}$ $\frac{1}{^{2n}\mathrm{C}_n}$ $\frac{1}{2}$ asked Sep 17, 2014 in Probability by Rucha Shelke Active (3.3k points) | 2.4k views
CommonCrawl
\begin{document} \bstctlcite{IEEEexample:BSTcontrol} \title{InstantNet: Automated Generation and Deployment of Instantaneously Switchable-Precision Networks } \makeatletter \newcommand{\linebreakand}{ \end{@IEEEauthorhalign} \mbox{}\par \mbox{} \begin{@IEEEauthorhalign} } \makeatother \author{\IEEEauthorblockN{Yonggan Fu} \IEEEauthorblockA{\textit{Rice University} \\ [email protected]} \and \IEEEauthorblockN{Zhongzhi Yu} \IEEEauthorblockA{\textit{Rice University} \\ [email protected]} \and \IEEEauthorblockN{Yongan Zhang} \IEEEauthorblockA{\textit{Rice University} \\ [email protected]} \and \IEEEauthorblockN{Yifan Jiang} \IEEEauthorblockA{\textit{University of Texas at Austin} \\ [email protected]} \and \IEEEauthorblockN{Chaojian Li} \IEEEauthorblockA{\textit{Rice University} \\ [email protected]} \linebreakand \IEEEauthorblockN{Yongyuan Liang} \IEEEauthorblockA{\textit{Sun Yat-sen University} \\ [email protected]} \and \IEEEauthorblockN{Mingchao Jiang} \IEEEauthorblockA{\textit{Rice University} \\ [email protected]} \and \IEEEauthorblockN{Zhangyang Wang} \IEEEauthorblockA{\textit{University of Texas at Austin} \\ [email protected]} \and \IEEEauthorblockN{Yingyan Lin} \IEEEauthorblockA{\textit{Rice University} \\ [email protected]} } \maketitle \begin{abstract} The promise of Deep Neural Network (DNN)-powered Internet of Things (IoT) devices has motivated a tremendous demand for automated solutions to enable fast development and deployment of efficient (1) DNNs equipped with instantaneous accuracy-efficiency trade-off capability to accommodate the time-varying resources at IoT devices and (2) dataflows to optimize DNNs' execution efficiency on different devices. Therefore, we propose InstantNet to automatically generate and deploy instantaneously switchable-precision networks which operate at variable bit-widths. Extensive experiments show that the proposed InstantNet consistently outperforms state-of-the-art designs. 
Our code is available at: \underline{\href{https://github.com/RICE-EIC/InstantNet}{https://github.com/RICE-EIC/InstantNet}}. \end{abstract} \begin{IEEEkeywords} switchable-precision networks, NAS, dataflow \end{IEEEkeywords} \section{Introduction} The prohibitive complexity of powerful deep neural networks (DNNs) calls for hardware-efficient DNN solutions \cite{eyeriss,10.1145/3210240.3210337,8050797}. When it comes to DNNs' hardware efficiency on IoT devices, the model complexity (e.g., bit-widths), dataflows, and hardware architectures are the major performance determinants. Early works mostly provide \textit{static} solutions, i.e., once developed, the algorithm/dataflow/hardware are fixed, whereas IoT applications often face time/energy constraints that vary over time. Recognizing this gap, recent works \cite{jin2019adabits,guerra2020switchable} have attempted to develop efficient DNNs with instantaneous accuracy-cost trade-off capability. For example, switchable-precision networks (SP-Nets) \cite{jin2019adabits,guerra2020switchable} can maintain a competitive accuracy under different bit-widths without fine-tuning under each bit-width, making it possible to allocate bit-widths on the fly to adapt to IoT devices' instantaneous resources. Despite SP-Nets' great promise \cite{jin2019adabits,guerra2020switchable}, there are still major challenges in enabling their deployment into numerous IoT devices. First, existing SP-Nets are manually designed, largely limiting their extensive adoption as \textit{each application} would require a \textit{different} SP-Net. Second, while the best dataflow for SP-Nets under \textit{different bit-widths} can differ and is an important determinant of their on-device efficiency \cite{venkatesanmagnet}, there is still a lack of a generic and publicly available framework that can suggest optimal dataflows for SP-Nets under \textit{each of their bit-widths} on \textit{different IoT devices}. 
Both of these challenges hinder the fast development and deployment of SP-Net-powered DNN solutions for \textit{diverse} IoT hardware platforms. To tackle them, we make the following contributions: \begin{itemize} \item We propose InstantNet, an end-to-end framework that automates the development (i.e., the generation of SP-Nets given a dataset and target accuracy) and deployment (i.e., the generation of the optimal dataflows) of SP-Nets. To the best of our knowledge, InstantNet is \textbf{the first} to simultaneously target both development and deployment of SP-Nets. \item We develop switchable-precision neural architecture search (SP-NAS) that integrates a novel cascade distillation training to ensure that the generated SP-Nets under all bit-widths achieve the same or better accuracy than both \textit{NAS-generated} DNNs optimized for individual bit-widths and SOTA \textit{expert-designed} SP-Nets. \item We propose AutoMapper, which integrates a generic dataflow space and an evolutionary algorithm to navigate the large and discrete mapping-method space and automatically search for optimal dataflows given a DNN (e.g., an SP-Net under a selected bit-width) and target device. \item Extensive experiments based on real-device measurements and hardware synthesis validate InstantNet's effectiveness in consistently outperforming SOTA designs, e.g., achieving an 84.68\% real-device Energy-Delay-Product (EDP) reduction while boosting the accuracy by 1.44\% over the most competitive competitor under the same settings. \end{itemize} \section{Related works} \textbf{Static and switchable-precision DNNs.} DNN quantization aims to compress DNNs at the most fine-grained bit-level~\cite{fractrain, fu2021cpt}. To accommodate constrained and time-varying resources on IoT devices, SP-Nets~\cite{jin2019adabits, guerra2020switchable} aim for instantaneously switchable accuracy-efficiency trade-offs at the bit-level. 
However, designing such DNNs and the corresponding mapping methods for every scenario can be engineering-expensive and time-consuming, considering the ever-increasing number of IoT devices with diverse hardware platforms and application requirements. As such, techniques that enable fast development and deployment of SP-Nets are highly desirable for expediting the deployment of affordable DNNs into numerous IoT devices. \textbf{Neural Architecture Search for efficient DNNs.} To free humans from laborious manual design, NAS~\cite{zoph2016neural, fu2020autogandistiller} has been introduced to enable the automatic search for efficient DNNs with both competitive accuracy and hardware efficiency given a target dataset. The works \cite{wang2019haq, chen2018joint, wu2018mixed} incorporate quantization bit-widths into their search space and search for mixed-precision networks. However, all these NAS methods search for quantized DNNs with only one \textit{fixed} bit-width, lacking the capability to instantly adapt to other bit-widths without fine-tuning. \textbf{Mapping DNNs to devices/hardware.} When deploying DNNs into IoT devices with diverse hardware architectures, one major factor that determines hardware efficiency is the dataflow \cite{venkatesanmagnet}. For devices with application-specific integrated circuit (ASIC) or FPGA hardware, various innovative dataflows \cite{eyeriss, Optimize_fpga_for_DNN, zhang2018dnnbuilder,10.1109/ISCA45697.2020.00082} have been developed to maximize the reuse opportunities. Recently, MAGNet~\cite{venkatesanmagnet} has been proposed to automatically identify optimal dataflows and design parameters of a tiled architecture. However, its highly template-based design space, e.g., a pre-defined set of nested loop-orders, can restrict the generality and result in sub-optimal performance. Furthermore, automatically identifying optimal mapping methods for DNNs under different bit-widths has not yet been explored. 
\begin{figure} \caption{Overview of InstantNet, which first generates SP-Nets with high accuracy under all bit-widths, and then suggests dataflows to maximize the generated SP-Nets' execution efficiency under different bit-widths on the target device.} \label{fig:overview} \end{figure} \section{The proposed InstantNet framework} Here we present our InstantNet framework, starting from an overview and then its key enablers including cascade distillation training (CDT), SP-NAS, and AutoMapper. \subsection{InstantNet overview} \label{sec:overview} Fig.~\ref{fig:overview} shows an overview of InstantNet. Specifically, given the target application and device, it automates the development and deployment of SP-Nets. To this end, InstantNet integrates two key enablers: (1) SP-NAS and (2) AutoMapper. SP-NAS incorporates an innovative cascade distillation to search for SP-Nets, providing IoT devices' desired instantaneous accuracy-efficiency trade-off capability. AutoMapper adopts a generic dataflow design space and an evolution-based algorithm to automatically search for optimal dataflows of SP-Nets under different bit-widths on the target device. \begin{figure*} \caption{Visualizing the prediction distribution of MobileNetV2 on CIFAR-100 under \textbf{(left)} 4-bit training with vanilla distillation, \textbf{(middle)} 4-bit training with the proposed CDT, and \textbf{(right)} 32-bit training.} \label{fig:output} \end{figure*} \subsection{InstantNet training: Bit-Wise Cascade Distillation} \label{sec:cdt} Unlike generic quantized DNNs optimized to maximize accuracy under one individual bit-width, InstantNet aims to generate SP-Nets whose accuracy \textit{under all bit-widths} is the same as or even higher than that of DNNs customized for individual bit-widths. The key challenge is to ensure high accuracy for lower bit-widths, which is particularly difficult for compact DNN models whose accuracy is more sensitive to quantization. 
For example, the SOTA SP-Net~\cite{guerra2020switchable} fails to work at lower bit-widths when applied to MobileNetV2~\cite{sandler2018mobilenetv2}. This challenge has motivated InstantNet's CDT method, which takes advantage of the fact that the quantization noises of SP-Nets under adjacent or closer bit-widths differ less. Our hypothesis is that distillation between adjacent and closer bit-widths helps more smoothly enforce the accuracy (or activation distribution) of SP-Nets under low bit-widths to approach that of their full-precision counterparts. In this way, CDT can simultaneously boost the accuracy of SP-Nets under all bit-widths by enforcing the SP-Net under each bit-width to distill from \textbf{all higher bit-widths}: \begin{equation} \begin{split} L_{total} &= \frac{1}{N} \sum_{i=0}^{N-1} L_{train}^{cas}(Q_i(\omega)), \;\; \text{where} \\ L_{train}^{cas}(Q_i(\omega)) &= L_{ce}(Q_i(\omega), label) \\ &+ \beta \sum_{j=i+1}^{N-1} L_{mse}(Q_i(\omega), SG(Q_j(\omega))) \end{split} \label{eqn:cdt} \end{equation} \noindent where $L_{total}$ is the SP-Net's average loss under all the $N$ candidate bit-widths, $L_{ce}$ and $L_{mse}$ are the cross-entropy and mean square error losses, respectively, $Q_i(\omega)$ is the SP-Net characterized with weights $\omega$ under the $i$-th bit-width, $\beta$ is a trade-off parameter, and $SG$ is the stop-gradient function, i.e., gradient backpropagation from higher bit-widths is prohibited when calculating the distillation loss~\cite{guerra2020switchable}. To verify the effectiveness of CDT, we visualize the prediction distribution (classification probability after softmax) of MobileNetV2 on CIFAR-100 under the bit-width set $\{4, 8, 12, 16, 32\}$ (quantized by SBM~\cite{banner2018scalable}) trained using different strategies in Fig.~\ref{fig:output}. 
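For concreteness, the cascade loss in Eq.~(\ref{eqn:cdt}) can be sketched in a few lines (a minimal NumPy sketch, not the paper's training code; the per-bit-width logits would come from the SP-Net's quantized forward passes, and in a real framework the higher-bit outputs inside the distillation term would be detached to realize the stop-gradient $SG$):

```python
import numpy as np

def cross_entropy(logits, label):
    # softmax cross-entropy for a single example
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def cdt_loss(outputs, label, beta=0.5):
    """Cascade distillation loss of Eq. (1).

    outputs: list of logit vectors, one per candidate bit-width,
             ordered from the lowest to the highest bit-width.
    Each bit-width is distilled from *all* higher bit-widths; here the
    stop-gradient is implicit since NumPy carries no gradients.
    """
    n = len(outputs)
    total = 0.0
    for i in range(n):
        loss_i = cross_entropy(outputs[i], label)  # task loss L_ce
        for j in range(i + 1, n):                  # distill from all higher bits
            loss_i += beta * np.mean((outputs[i] - outputs[j]) ** 2)  # L_mse
        total += loss_i
    return total / n                               # average over the N bit-widths
```

With `beta=0` the loss reduces to the plain average cross-entropy over bit-widths, which makes the role of the cascade distillation term easy to isolate.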
We show the prediction distribution for a randomly sampled image from the test dataset under three cases: (1) the 4-bit network trained with vanilla distillation, i.e., considering only distillation from the 32-bit model, (2) the 4-bit network trained with our CDT technique, and (3) the 32-bit network. We can observe that vanilla distillation fails to narrow the gap between the 32-bit and the lowest 4-bit networks due to the large quantization error gap. This is a common phenomenon among efficient models with depthwise layers, which are sensitive to low precision, on all the considered test datasets; e.g., we observe that the validation accuracy of the 4-bit network with only vanilla distillation is around 1\%, indicating the failure of vanilla distillation in tackling bit-width sets with a large dynamic range. In contrast, our CDT notably helps the prediction distribution of the 4-bit network smoothly evolve toward that of the 32-bit one, and boosts its accuracy to 71.21\%, verifying CDT's effectiveness. \subsection{InstantNet search: Switchable-Precision NAS} \label{sec:banas} Here we introduce another key enabler of InstantNet, SP-NAS. To the best of our knowledge, InstantNet is \textbf{the first} to address \textit{how to automatically generate networks which naturally favor working under various bit-widths}. In addition, to resolve the performance bottleneck of SOTA manually designed SP-Nets~\cite{jin2019adabits, guerra2020switchable}, i.e., large accuracy degradation under the lowest bit-width, we develop a heterogeneous scheme for updating the weights and architecture parameters. 
Specifically, we update the weights based on our CDT method (see Eq.~\ref{eqn:cdt}), which explicitly incorporates the switchable-bit property into the training process; and for updating the architecture parameters of the SP-Net, we adopt \textit{only the weights under the lowest bit-width}, forcing the generated networks to inherently tackle SP-Nets' bottleneck, i.e., the large accuracy loss under the lowest bit-width: \begin{equation} \label{eqn:banas} \begin{split} & \min \limits_{\alpha} L_{val}(Q_0(\omega^*), \alpha)+\lambda L_{eff}(\alpha) \\ & s.t. \quad \omega^* = \underset{\omega}{\arg\min} \,\, \frac{1}{N} \sum_{i=0}^{N-1} L_{train}^{cas}(Q_i(\omega), \alpha) \end{split} \end{equation} \noindent where $\omega$ and $\alpha$ are the supernet's weights \cite{liu2018darts} and architecture parameters, respectively, $L_{eff}$ is an efficiency loss (e.g., energy cost), and $Q_0(\omega)$ is the SP-Net under the lowest bit-width. Without loss of generality, here we adopt the SOTA differentiable NAS~\cite{liu2018darts} and search space~\cite{wu2019fbnet}. \begin{figure*} \caption{Overview of the goal, generic dataflow space, and InstantNet's AutoMapper, where TBS denotes ``to be searched''.} \label{fig:map_overview} \end{figure*} \subsection{InstantNet deploy: Evolution-based AutoMapper} This subsection introduces InstantNet's AutoMapper, of which an overview is shown in Fig.~\ref{fig:map_overview}. Motivated by the fact that different mapping methods can lead to orders-of-magnitude differences in hardware efficiency \cite{venkatesanmagnet}, AutoMapper aims to accept (1) DNNs (e.g., SP-Nets generated by our SP-NAS), (2) the target device, and (3) target hardware efficiency, and then generate mapping methods that maximize both the task accuracy and hardware efficiency of the given SP-Nets under all bit-widths when executed on the target device. 
\textbf{Generic Dataflow Design Space.} A generic dataflow design space is a prerequisite for effective algorithmic exploration and optimization of on-device dataflows, yet is challenging to develop. There are numerous choices for how to temporally and spatially schedule all of a DNN's operations to be executed on the target accelerators. Specifically, since a DNN involves many more operations than an IoT device can execute in each clock cycle (e.g., $19.6\times10^{9}$ operations \cite{simonyan2014very} vs. 900 MACs \cite{xilinxzc706} assuming 16-bit precision), numerous possible dataflows exist for running DNNs on a device. To tackle this challenge, we propose a generic design space for on-device dataflows, which (1) covers all design choices for generalization and (2) is easy to understand for ease of adoption. Our proposed space leverages commonly used nested \textit{for-loop} descriptions~\cite{eyeriss,DNNCHIPPREDICTOR}. For better illustration, here we describe the high-level principles. From a nested \textit{for-loop} description, our dataflow space extracts all possible choices characterized by the following factors: \textit{Loop-order}: the processing order of each dimension within each memory hierarchy, derived from all possible permutations without overlap. \textit{Loop-size}: the number of operations in one iteration of a specific dimension, which cannot be easily enumerated by hand; we design a simple analytical algorithm to derive all possible choices. \textit{Pipeline/multi-cycle}: whether to use a pipeline or a multi-cycle scheme; the former processes a small chunk of each layer in a pipelined manner, while the latter processes all the layers sequentially. Considering AlexNet \cite{krizhevsky2012imagenet} and six levels of nested loops, there are over \textbf{$10^{27}$ discrete mapping-method choices}, posing a great need for developing efficient and effective search algorithms. 
\begin{figure} \caption{Evolutionary AutoMapper} \label{alg:eaalg} \end{figure} \textbf{Evolutionary Search Algorithm.} To navigate the large and discrete space of mapping methods, we adopt an evolutionary search algorithm, considering that evolutionary algorithms offer more exploitation than random search and are better suited to highly discrete spaces \cite{google_ev,genesys}. Specifically, we keep track of the hardware efficiency ranking of the currently sampled mapping methods at each iteration. Then, if the pool of current samples is smaller than a specified size, we select a few of the best-performing sampled mapping methods and randomly perturb a small number of their features associated with the aforementioned design factors to generate new mapping methods to be evaluated in the next iteration; otherwise, new mapping methods with completely randomly selected design factors are generated. We summarize our Evolutionary AutoMapper in Alg.~\ref{alg:eaalg}. \begin{table}[!t] \centering \caption{InstantNet's CDT over SBM~\cite{banner2018scalable} (SOTA training for quantized DNNs) and SOTA SP-Nets (SP~\cite{guerra2020switchable} and AdaBits~\cite{jin2019adabits}) on \textbf{MobileNetV2} and CIFAR-100 in terms of test accuracy (\%), where the values in the bracket represent the accuracy drop of the baseline methods compared to our CDT. 
} \resizebox{0.95\linewidth}{!}{ \begin{tabular}{ccccc} \toprule \multicolumn{1}{c}{Bit-widths} & \multicolumn{1}{c}{SBM~\cite{banner2018scalable}} & \multicolumn{1}{c}{SP~\cite{guerra2020switchable}} & \multicolumn{1}{c}{AdaBits~\cite{jin2019adabits}} & CDT (Proposed) \\ \midrule 4 & 70.55 \textbf{(-0.60)} & 66.75 \textbf{(-4.40)} & 68.07 \textbf{(-3.08)} & \textbf{71.15}\\ 8 & 74.40 \textbf{(-0.72)} & 71.69 \textbf{(-3.43)} & 73.86 \textbf{(-1.26)} & \textbf{75.12} \\ 12 & 74.87 \textbf{(-0.16)} & 74.16 \textbf{(-0.87)} & 73.65 \textbf{(-1.38)} & \textbf{75.03} \\ 16 & 75.03 \textbf{(-0.19)} & 74.23 \textbf{(-0.99)} & 73.87 \textbf{(-1.35)} & \textbf{75.22} \\ 32 & 75.23 \textbf{(+0.25)} & 74.11 \textbf{(-0.87)} & 74.51 \textbf{(-0.47)} & \textbf{74.98} \\ \midrule \midrule 4 & 70.55 \textbf{(-0.53)} & 67.63 \textbf{(-3.45)} & 68.37 \textbf{(-2.71)} & \textbf{71.08} \\ 5 & 74.13 \textbf{(-0.32)} & 72.95 \textbf{(-1.50)} & 73.52 \textbf{(-0.93)} & \textbf{74.45} \\ 6 & 74.69 \textbf{(-0.33)} & 74.15 \textbf{(-0.87)} & 74.60 \textbf{(-0.42)} & \textbf{75.02} \\ 8 & 74.40 \textbf{(-0.64)} & 74.99 \textbf{(-0.05)} & 75.02 \textbf{(-0.02)} & \textbf{75.04} \\ \bottomrule \end{tabular} } \label{tab:cascade} \end{table} \section{Experiment results} We first describe our experiment setup and then evaluate each enabler of InstantNet, i.e., CDT, SP-NAS, and AutoMapper. After that, we benchmark InstantNet over SOTA SP-Nets on SOTA accelerators \cite{zhang2018dnnbuilder, XilinxCH65, eyeriss}. 
\subsection{Experiment setup} \label{sec:exp_setup} \subsubsection{Algorithm experiment setup} \textbf{Datasets \& Baselines.} We consider three datasets (CIFAR-10/CIFAR-100/ImageNet), and evaluate InstantNet over (1) all currently published SP-Nets (AdaBits~\cite{jin2019adabits} and SP~\cite{guerra2020switchable}) with the DoReFa~\cite{zhou2016dorefa} quantizer and (2) a SOTA quantized DNN method SBM~\cite{banner2018scalable} to train a SOTA compact DNN MobileNetV2~\cite{sandler2018mobilenetv2} under individual bit-widths. \textbf{Search and training on CIFAR-10/100 and ImageNet.} \underline{Search space:} we adopt the same search space as~\cite{wu2019fbnet} except for the stride settings for each group to adapt to the resolution of the input images in CIFAR-10/100. \underline{Search settings:} On CIFAR-10/100, we search for 50 epochs with batch size 64. In particular, we (1) update the supernet weights with our cascade distillation technique as in Eq.~(\ref{eqn:banas}) on half of the training dataset using an SGD optimizer with a momentum of 0.9 and an initial learning rate (LR) of 0.025 with cosine decay, and (2) update the network architecture parameters with the lowest bit-width as in Eq.~(\ref{eqn:banas}) on the other half of the training dataset using an Adam optimizer with a momentum of 0.9 and a fixed LR of 3e-4. We apply Gumbel softmax on the architecture parameters as the contributing coefficients of each option to the supernet (following~\cite{wu2019fbnet}), where the initial temperature is 3 and is then decayed by a factor of 0.94 at each epoch. On ImageNet, we follow the same hyper-parameter settings for the network search as~\cite{wu2019fbnet}. \underline{Evaluating the derived networks:} for training the derived networks from scratch using our CDT, on CIFAR-10/100 we adopt an SGD optimizer with a momentum of 0.9 and an initial LR of 0.025 with cosine decay. Each network is trained for 200 epochs with batch size 128. On ImageNet, we follow~\cite{wu2019fbnet}. 
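As a concrete illustration of the relaxation and temperature schedule above, consider the following minimal NumPy sketch (variable names are ours; the actual search uses the implementation of~\cite{wu2019fbnet}):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(arch_logits, temperature):
    """Soft contributing coefficients over the candidate ops of one
    supernet layer: softmax of (logits + Gumbel noise) / temperature."""
    gumbel = -np.log(-np.log(rng.uniform(size=arch_logits.shape)))  # Gumbel(0,1)
    scaled = (arch_logits + gumbel) / temperature
    scaled -= scaled.max()                       # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

def temperature_at(epoch, t0=3.0, decay=0.94):
    # schedule used during search: initial temperature 3,
    # decayed by a factor of 0.94 at each epoch
    return t0 * decay ** epoch
```

As the temperature anneals, the sampled coefficients sharpen toward a near one-hot choice of one op per layer.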
\subsubsection{Hardware experiment setup} \textbf{Implementation methodology.} We consider two commonly used IoT hardware platforms, i.e., ASIC and FPGA, for evaluating our AutoMapper. Specifically, for FPGA, we adopt the Vivado HLx design tool-flow where we first synthesize the mapping-method design in C++ via Vivado HLS, and then plug the HLS-exported IPs into a Vivado IP integrator to generate the corresponding bitstreams, which are programmed into the FPGA board for on-board execution and measurements; for ASIC, we synthesize the Verilog designs based on the generated dataflows using a Synopsys Design Compiler on a commercial CMOS technology, and then place and route using a Synopsys IC Compiler to obtain the resulting design's actual area. \textbf{Baselines.} We evaluate AutoMapper over expert/tool-generated SOTA dataflows for both FPGA and ASIC platforms, including DNNBuilder~\cite{zhang2018dnnbuilder} and CHaiDNN~\cite{XilinxCH65} for FPGA, and Eyeriss~\cite{eyeriss} and MAGNet~\cite{venkatesanmagnet} for ASIC. For DNNBuilder~\cite{zhang2018dnnbuilder}, MAGNet~\cite{venkatesanmagnet} and CHaiDNN~\cite{XilinxCH65}, we use their reported results; for Eyeriss~\cite{eyeriss}, we use their own published and verified simulator~\cite{Gao2017Tetris} to obtain their results. \subsection{Ablation study of InstantNet: CDT} \label{sec:exp_cd} \textbf{Experiment settings.} For evaluating InstantNet's CDT, we benchmark it over a SOTA quantized DNN training method (which independently trains DNNs at each bit-width) and two SP-Nets (AdaBits~\cite{jin2019adabits} and SP~\cite{guerra2020switchable}). In light of our IoT application goal, we consider MobileNetV2~\cite{sandler2018mobilenetv2} (a SOTA efficient model balancing task accuracy and hardware efficiency) with CIFAR-100, and adopt two different bit-width sets with large and narrow dynamic ranges, respectively. 
Without loss of generality, our CDT is designed with the SOTA quantizer SBM~\cite{banner2018scalable} and switchable batch normalization as in SP~\cite{guerra2020switchable}. \textbf{Results and analysis.} From Tab.~\ref{tab:cascade}, we have three observations: (1) our CDT consistently outperforms the two SP-Net baselines under all the bit-widths, verifying CDT's effectiveness and our hypothesis that progressively distilling from all higher bit-widths can help more smoothly approach the accuracy of the full-precision model; (2) CDT is particularly capable of boosting accuracy at low bit-widths, which have been shown to be the bottleneck in existing SP-Nets~\cite{jin2019adabits}, e.g., a 2.71\%$\sim$4.4\% higher accuracy at the lowest 4-bit over the two SP-Net baselines; and (3) CDT always achieves a higher or comparable accuracy over the SOTA quantized DNN training method SBM that independently trains and optimizes each individual bit-width: for bit-widths ranging from 4-bit to 8-bit, CDT achieves a 0.32\%$\sim$0.72\% accuracy improvement over SBM, indicating the effectiveness of our CDT in boosting DNNs' accuracy under lower bit-widths. \begin{table}[!t] \centering \caption{CDT over independently trained SBM~\cite{banner2018scalable} on \textbf{ResNet-38}, where the values in the bracket represent CDT's accuracy gain over SBM (the higher, the better).} 
\resizebox{0.8\linewidth}{!}{ \begin{tabular}{ccc|cc} \toprule Dataset & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ \midrule \multicolumn{1}{c}{Bit-widths} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} \\ \midrule 4 & 90.91 & \textbf{91.45 (+0.54)} & 63.82 & \textbf{64.18 (+0.36)} \\ 8 & 92.78 & \textbf{93.03 (+0.25)} & 66.71 & \textbf{67.45 (+0.74)} \\ 12 & 92.75 & \textbf{93.06 (+0.31)} & 67.13 & \textbf{67.42 (+0.29)} \\ 16 & 92.90 & \textbf{93.09 (+0.19)} & 67.17 & \textbf{67.50 (+0.33)} \\ 32 & 92.5 \ & \textbf{93.08 (+0.58)} & 67.18 & \textbf{67.47 (+0.29)} \\ \midrule \midrule 4 & 90.91 & \textbf{91.88 (+0.97)} & 63.82 & \textbf{64.12 (+0.30)} \\ 5 & 92.35 & \textbf{92.56 (+0.21)} & 66.20 & \textbf{66.68 (+0.48)} \\ 6 & 92.80 & \textbf{92.93 (+0.13)} & 66.48 & \textbf{66.55 (+0.07)} \\ 8 & 92.78 & \textbf{93.02 (+0.24)} & 66.71 & \textbf{66.88 (+0.17)} \\ \bottomrule \end{tabular} } \label{tab:resnet38} \end{table} \begin{table}[!t] \centering \caption{CDT over independently trained SBM~\cite{banner2018scalable} on \textbf{ResNet-74}, where the values in the bracket represent CDT's accuracy gain over SBM (the higher, the better). 
} \resizebox{0.8\linewidth}{!}{ \begin{tabular}{ccc|cc} \toprule Dataset & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ \midrule \multicolumn{1}{c}{Bit-widths} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} & \multicolumn{1}{c}{SBM} & \textbf{CDT (Proposed)} \\ \midrule 4 & 91.82 & \textbf{92.34 (+0.52)} & 66.31 & \textbf{67.35 (+1.04)} \\ 8 & 93.22 & \textbf{93.56 (+0.34)} & 69.85 & \textbf{69.98 (+0.13)} \\ 12 & 93.26 & \textbf{93.53 (+0.27)} & 69.97 & \textbf{69.99 (+0.02)} \\ 16 & 93.40 & \textbf{93.51 (+0.11)} & 69.92 & \textbf{70.01 (+0.09)} \\ 32 & 93.38 & \textbf{93.49 (+0.11)} & 69.46 & \textbf{69.98 (+0.52)} \\ \midrule \midrule 4 & 91.82 & \textbf{92.51 (+0.69)} & 66.31 & \textbf{67.34 (+1.03)} \\ 5 & 92.98 & \textbf{93.54 (+0.56)} & 68.66 & \textbf{69.49 (+0.83)} \\ 6 & 93.19 & \textbf{93.47 (+0.28)} & 69.42 & \textbf{69.65 (+0.23)} \\ 8 & 93.22 & \textbf{93.72 (+0.50)} & 69.85 & \textbf{70.02 (+0.17)} \\ \bottomrule \end{tabular} } \label{tab:resnet74} \end{table} \begin{table}[btp] \centering \caption{CDT over SP~\cite{guerra2020switchable} on ResNet-18 and TinyImageNet in terms of test accuracy, where the values in the bracket represent CDT's accuracy gain over SP.} \begin{tabular}{cc|cc} \toprule \multicolumn{2}{c}{Bit-widths} & \multicolumn{2}{c}{Methods} \\ \midrule Weight & Activation & SP & \textbf{CDT (Proposed)} \\ \midrule 2 & 2 & 47.8 & \textbf{52.3 (+4.5)} \\ 2 & 32 & 50.5 & \textbf{51.3 (+0.8)} \\ 32 & 2 & 51.8 & \textbf{53.4 (+1.6)} \\ \bottomrule \end{tabular} \label{tab:tinyimagenet} \end{table} We also benchmark CDT on ResNet-38/74~\cite{wang2018skipnet} with CIFAR-10/CIFAR-100 over independently trained SBM~\cite{banner2018scalable}. 
As shown in Tab.~\ref{tab:resnet38} and Tab.~\ref{tab:resnet74} for ResNet-38 and ResNet-74, respectively, CDT consistently achieves a better/comparable accuracy (0.02\%$\sim$1.04\% higher) than the independently trained ones under all the models/datasets/bit-widths, and notably boosts the accuracy of the lowest bit-width (4-bit) by 0.30\%$\sim$1.04\%. To evaluate CDT's performance when involving an extremely low bit-width (2-bit), we further benchmark CDT on ResNet-18~\cite{he2016deep} and TinyImageNet~\cite{le2015tiny} over the SP~\cite{guerra2020switchable} baseline. The results are shown in Tab.~\ref{tab:tinyimagenet}. It can be observed that CDT is particularly effective in boosting the accuracy at lower bit-widths. Specifically, when the weights and activations both adopt 2 bits, the proposed CDT achieves a 4.5\% higher accuracy than the baseline SP method. \begin{figure} \caption{InstantNet's SP-NAS over Full-Precision-NAS (FP-NAS) and Low-Precision-NAS (LP-NAS) on CIFAR-100 under large, middle, and small FLOPs constraints trained for two bit-width sets: (a) [4, 8, 12, 16, 32], and (b) [4, 5, 6, 8].} \label{fig:exp_spnas} \end{figure} \subsection{Ablation study of InstantNet: SP-NAS} \label{sec:exp_nas} From Fig.~\ref{fig:exp_spnas}, we can see that: (1) SP-NAS consistently outperforms the baselines at the lowest bit-width, which is the bottleneck in SOTA SP-Nets~\cite{jin2019adabits}, while offering a higher/comparable accuracy at higher bit-widths. Specifically, SP-NAS achieves a 0.71\%$\sim$1.16\% higher accuracy over the strongest baseline at the lowest bit-width on both bit-width sets under the three FLOPs constraints; and (2) SP-NAS shows a notable superiority on the bit-width set with a larger dynamic range, which is more favorable for IoT applications as larger dynamic ranges provide more flexible instantaneous accuracy-efficiency trade-offs. 
Specifically, compared with the strongest baseline, SP-NAS achieves a 1.16\% higher accuracy at the lowest bit-width and a 0.25\%$\sim$0.61\% higher accuracy at other bit-widths, while offering a 24.9\% reduction in FLOPs on the bit-width set [4, 8, 12, 16, 32]. This experiment validates that SP-NAS can indeed effectively tackle SP-Nets' bottleneck and improve scalability over previous search methods, which fail to guarantee accuracy at lower bit-widths. \begin{figure} \caption{AutoMapper over SOTA expert-crafted and tool-generated dataflows on FPGA/ASIC.} \label{fig:exp_automapper} \end{figure} \subsection{Ablation study of InstantNet: AutoMapper} \label{sec:exp_mapping} As shown in Fig.~\ref{fig:exp_automapper}, we can see that (1) the dataflows suggested by AutoMapper (taking less than 10 minutes of search time) even outperform SOTA expert-crafted designs: the mappings generated by AutoMapper achieve 65.76\% and 85.74\% EDP reductions on AlexNet~\cite{krizhevsky2012imagenet} and VGG16~\cite{simonyan2014very} compared with Eyeriss~\cite{eyeriss}, respectively; (2) AutoMapper achieves higher cost savings on ASIC than on FPGA, because ASIC designs are more flexible than FPGA in their dataflows and thus achieve superior performance when explored with effective automated search tools; and (3) compared with MAGNet, AutoMapper achieves roughly a 9.3\% reduction in energy cost, as MAGNet only uses a pre-defined set of loop-orders to cover different dataflow scenarios, which may not generically fit networks' diverse layer structures, thus resulting in inferior performance. \begin{figure} \caption{InstantNet-generated and SOTA IoT systems on CIFAR-10/100 under two bit-width sets. 
} \label{fig:exp_final_cifar} \end{figure} \subsection{InstantNet over SOTA systems} \label{sec:exp_sota} \begin{wrapfigure}{r}{0.25\textwidth} \begin{center} \includegraphics[width=0.25\textwidth]{Figs/exp_final_imagenet.pdf} \end{center} \caption{InstantNet and SOTA IoT systems on ImageNet with bit-widths of $[4, 5, 6, 8]$.} \label{fig:exp_final_imagenet} \end{wrapfigure} \textbf{Results and analysis on CIFAR-10/100.} As shown in Fig.~\ref{fig:exp_final_cifar}, we can see that (1) InstantNet-generated systems consistently outperform the SOTA baselines in terms of the trade-off between accuracy and EDP (a commonly-used hardware metric for ASIC), achieving higher or comparable accuracy and lower EDP at lower bit-widths than the baselines. In particular, InstantNet can achieve up to an 84.67\% reduction in EDP with a 1.44\% higher accuracy on CIFAR-100 and the bit-width set of $[4, 8, 12, 16, 32]$; and (2) InstantNet always surpasses the SOTA baselines under the bottleneck bit-width, i.e., the lowest one, with a 62.5\%$\sim$73.68\% reduction in EDP and a 0.91\%$\sim$5.25\% higher accuracy, which is notably more practical for real-world IoT deployment. \textbf{Results and analysis on ImageNet.} As shown in Fig.~\ref{fig:exp_final_imagenet}, the InstantNet-generated system achieves a $1.86\times$ improvement in Frames-Per-Second (FPS) while having a comparable accuracy (-0.05\%) over the SOTA FPGA-based IoT system. \section{Conclusion} We propose an \textit{automated} framework termed \textbf{InstantNet} to automatically search for SP-Nets (i.e., capable of operating at variable bit-widths) that can achieve the same or even better accuracy than DNNs optimized for individual bit-widths, and to generate optimal dataflows to maximize efficiency when DNNs are executed under various bit-widths on different devices. 
Extensive experiments show that InstantNet is an effective automated framework for expediting the development and deployment of efficient DNNs for numerous IoT applications with diverse specifications. \end{document}
\begin{document} \title{Nonlinear Acceleration of Momentum and Primal-Dual Algorithms} \author{Raghu Bollapragada} \address[Raghu Bollapragada:]{Corresponding author. \\ Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL, USA. \\The author was a PhD student in the department of Industrial Engineering and Management Sciences at Northwestern University, IL, USA, when this work was done.} \email[corresponding author]{[email protected]} \author{Damien Scieur} \address[Damien Scieur:]{SAMSUNG SAIL, Montreal, Canada.\\ This author was a PhD student at INRIA \& D.I., UMR 8548, \'Ecole Normale Sup\'erieure, Paris, France, when this work was done.} \email{[email protected]} \author{Alexandre d'Aspremont} \address[Alexandre d'Aspremont:]{CNRS \& D.I., UMR 8548, \'Ecole Normale Sup\'erieure, Paris, France.} \email{[email protected]} \keywords{} \date{\today} \subjclass[2010]{} \begin{abstract} We describe convergence acceleration schemes for multistep optimization algorithms. The extrapolated solution is written as a nonlinear average of the iterates produced by the original optimization method. Our analysis does not need the underlying fixed-point operator to be symmetric, hence handles e.g. algorithms with momentum terms such as Nesterov's accelerated method, or primal-dual methods. The weights are computed via a simple linear system and we analyze performance in both online and offline modes. We use Crouzeix's conjecture to show that acceleration performance is controlled by the solution of a Chebyshev problem on the numerical range of a non-symmetric operator modeling the behavior of iterates near the optimum. Numerical experiments are detailed on logistic regression problems. 
\end{abstract} \maketitle \oldsection{Introduction}\label{s:intro} Extrapolation techniques, such as Aitken's~$\Delta^2$ or Wynn's $\varepsilon$-algorithm, provide an improved estimate of the limit of a sequence using its last few iterates, and we refer the reader to \citet{brezinski2013extrapolation} for a complete survey. These methods have been extended to vector sequences, where they are known under various names, e.g. Anderson acceleration \citep{anderson1965iterative,walker2011anderson}, minimal polynomial extrapolation \citep{cabay1976polynomial} or reduced rank extrapolation \citep{eddy1979extrapolating}. Classical optimization algorithms typically retain only the last iterate or the average of iterates \citep{polyak1992acceleration} as their best estimate of the optimum, throwing away all the information contained in the converging sequence. This is highly wasteful from a statistical perspective, and extrapolation schemes instead estimate the optimum using a weighted average of the last iterates produced by the underlying algorithm, where the weights depend on the iterates (i.e. a {\em nonlinear} average). Overall, computing those weights means solving a small linear system, so nonlinear acceleration adds only marginal computational cost. Recent results by \citet{scieur2016regularized} adapted classical extrapolation techniques related to Aitken's~$\Delta^2$, Anderson's method and minimal polynomial extrapolation to design extrapolation schemes for accelerating the convergence of basic optimization methods such as gradient descent. They showed that using only iterates from fixed-step gradient descent, extrapolation algorithms achieve the optimal convergence rate of \citet{nesterov2013introductory} {\em without any modification to the original algorithm}. 
However, these results are only applicable to iterates produced by single-step algorithms such as gradient descent, where the underlying operator is symmetric, thus excluding much faster momentum-based methods such as SGD with momentum or Nesterov's algorithm. Our results here seek to extend those of \citep{scieur2016regularized} to multistep methods, i.e. to accelerate accelerated methods. Our contribution here is twofold. First, we show that the accelerated convergence bounds in \citep{scieur2016regularized} can be directly extended to multistep methods when the operator describing convergence near the optimum has a particular block structure, by modifying the extrapolating sequence. This result applies in particular to Nesterov's method and stochastic gradient algorithms with a momentum term. Second, we use Crouzeix's recent results \citep{Crou07,Crou17,Gree17} to show that, in the general non-symmetric case, acceleration performance is controlled by the solution of a Chebyshev problem on the numerical range of the linear, non-symmetric operator modeling the behavior of iterates near the optimum. We characterize the shape of this numerical range for various classical multistep algorithms such as Nesterov's method \citep{Nest83} and Chambolle-Pock's algorithm \citep{chambolle2011first}. We then study the performance of our technique on a logistic regression problem. The online version (which modifies iterations) is competitive with L-BFGS in our experiments and significantly faster than classical accelerated algorithms. Furthermore, it is robust to misspecified strong convexity parameters. \subsection*{Organization of the paper} In Section~\ref{s:nacc}, we describe the iteration schemes that we seek to accelerate, introduce the Regularized Nonlinear Acceleration (RNA) scheme, and show how to control its convergence rate for linear iterations (e.g. solving quadratic problems). 
In Section~\ref{s:crouzeix} we show how to bound the convergence rate of acceleration schemes on generic nonsymmetric iterates using Crouzeix's conjecture and bounds on the minimum of a Chebyshev problem written on the numerical range of the nonsymmetric operator. We apply these results to Nesterov's method and the Chambolle-Pock primal-dual algorithm in Section~\ref{s:algos}. We extend our results to generic nonlinear updates using a constrained formulation of RNA (called CNA) in Section~\ref{s:nonlin}. We show optimal convergence rates in the symmetric case for CNA on simple gradient descent with linear combination of previous iterates in Section~\ref{s:grad}, producing a much cleaner proof of the results in~\citep{scieur2016regularized} on RNA. In Section~\ref{s:online}, we show that RNA can be applied online, i.e. that we can extrapolate iterates produced by an extrapolation scheme at each iteration (previous results only worked in batch mode) and apply this result to speed up Nesterov's method. \oldsection{Nonlinear Acceleration}\label{s:nacc} We begin by describing the iteration template for the algorithms to which we will apply acceleration schemes. \subsection{General setting} Consider the following optimization problem \BEQ\label{eq:fprob} \min_{x\in\reals^n} f(x) \EEQ in the variable $x\in\reals^n$, where $f(x)$ is strongly convex with parameter~$\mu$ with respect to the Euclidean norm, and has a Lipschitz continuous gradient with parameter $L$ with respect to the same norm. We consider the following class of algorithms, written \BEQ \label{eq:general_iteration} \left\{\BA{l} x_{i} = g(y_{i-1})\\ y_i = \textstyle \sum_{j=1}^i \alpha_j^{(i)} x_j + \beta_j^{(i)} y_{j-1}, \EA\right. \EEQ where $x_i,y_i\in\reals^d$ and $g: \reals^d \to \reals^d$ is an iterative update, potentially stochastic. For example, $g(x)$ can be a gradient step with fixed stepsize, in which case $g(x)=x - h\nabla f(x)$. 
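As a concrete illustration (a minimal Python sketch with illustrative names, not the authors' code), the template \eqref{eq:general_iteration} can be instantiated with $g$ a fixed-stepsize gradient step on a quadratic; choosing $\alpha_i^{(i)}=1$ and all other coefficients zero recovers plain gradient descent:

```python
import numpy as np

# Minimal sketch of the iteration template:
#   x_i = g(y_{i-1}),  y_i = sum_j alpha_j^{(i)} x_j + beta_j^{(i)} y_{j-1}.
# Here g is a fixed-stepsize gradient step on an illustrative quadratic.
rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M.T @ M + np.eye(d)            # strongly convex quadratic f(x) = 0.5 x'Ax - b'x
b = rng.standard_normal(d)
L = np.linalg.norm(A, 2)           # Lipschitz constant of the gradient

def g(y):                          # g(y) = y - (1/L) grad f(y)
    return y - (A @ y - b) / L

def run_template(n_iter, combine):
    xs, ys = [], [np.zeros(d)]
    for i in range(1, n_iter + 1):
        xs.append(g(ys[-1]))       # x_i = g(y_{i-1})
        ys.append(combine(i, xs, ys))  # linear combination step defining y_i
    return xs, ys

# Plain gradient descent: y_i = x_i (alpha_i = 1, all other coefficients zero,
# so the coefficients sum to one, as the consistency condition requires).
xs, _ = run_template(100, lambda i, xs, ys: xs[-1])
x_star = np.linalg.solve(A, b)
```

The `combine` callback is where momentum-type schemes would mix earlier iterates.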
We assume the following condition on the coefficients $\alpha$ and $\beta$, to ensure consistency \citep{scieur2017integration}, \[ \textbf{1}^T(\alpha^{(i)} + \beta^{(i)}) = 1, \quad \alpha_i^{(i)} \neq 0, \quad \forall i. \] We can write these updates in matrix format, with \BEQ\label{eq:def_xy} X_i = [x_1,x_2,\ldots, x_i], \quad Y_i = [y_0,y_1,\ldots, y_{i-1}]. \EEQ Using this notation, \eqref{eq:general_iteration} reads (assuming $x_0=y_0$) \BEA\label{eq:general_iteration_matrix} X_i = g(Y_{i-1})\,, \qquad Y_i = [x_0, X_{i}]L_i, \EEA where $g(Y)$ stands for $[g(y_0),g(y_1),\ldots,g(y_{i-1})]$ and the matrix $L_i$ is upper-triangular of size $i\times i$ with nonzero diagonal coefficients, with columns summing to one. The matrix $L_i$ can be constructed iteratively, following the recurrence \BEQ L_i = \begin{bmatrix} L_{i-1} & \alpha_{[1:i-1]} + L_{i-1}\beta \\ 0_{1\times i-1} & \alpha_i \end{bmatrix}, \quad L_0 = 1.\label{eq:recurence_L} \EEQ In short, $L_i$ gathers coefficients from the linear combination in~\eqref{eq:general_iteration}. This matrix, together with $g$, characterizes the algorithm. The iterate update form \eqref{eq:general_iteration} is generic and includes many classical algorithms such as the accelerated gradient method in \citep{nesterov2013introductory}, where \[ \begin{cases} x_{i} &= g(y_{i-1}) = y_{i-1} - \frac{1}{L} \nabla f(y_{i-1}) \\ y_{i} &= \left(1+\frac{i-1}{i+2}\right)x_{i} - \frac{i-1}{i+2}\; x_{i-1}. \end{cases} \] As in~\citep{scieur2016regularized} we will focus on improving our estimates of the solution to problem~\eqref{eq:fprob} by tracking only the sequence of iterates $(x_i, y_i)$ produced by an optimization algorithm, without any further oracle calls to $g(x)$. The main difference with the work of \citep{scieur2016regularized} is the presence of a linear combination of previous iterates in the definition of $y$ in \eqref{eq:general_iteration}, so the mapping from $x_{i}$ to $x_{i+1}$ is usually \textit{non-symmetric}. 
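Nesterov's accelerated gradient method above fits this template directly; the sketch below (illustrative code, assuming a strongly convex quadratic objective) runs the two-sequence recursion and relies on the fact that the combination coefficients $1+\frac{i-1}{i+2}$ and $-\frac{i-1}{i+2}$ sum to one:

```python
import numpy as np

# Nesterov's accelerated gradient as an instance of the template:
#   x_i = g(y_{i-1}),   y_i = (1 + m_i) x_i - m_i x_{i-1},
# with m_i = (i - 1)/(i + 2); the two coefficients sum to one, as the
# consistency condition requires.  Illustrative quadratic objective.
rng = np.random.default_rng(1)
d = 8
M = rng.standard_normal((d, d))
A = M.T @ M + np.eye(d)             # f(x) = 0.5 x'Ax - b'x, strongly convex
b = rng.standard_normal(d)
L = np.linalg.norm(A, 2)

def g(y):                           # gradient step with stepsize 1/L
    return y - (A @ y - b) / L

x_prev = y = np.zeros(d)
for i in range(1, 2001):
    x = g(y)
    m = (i - 1) / (i + 2)
    y = (1 + m) * x - m * x_prev    # coefficients (1 + m) and -m sum to one
    x_prev = x

x_star = np.linalg.solve(A, b)
```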
For instance, for Nesterov's algorithm, the Jacobian of the update $(x_i, y_i) \mapsto (x_{i+1}, y_{i+1})$ reads \[ J_{x_{i+1}} = \begin{bmatrix} 0 & J_g\\ \left(1+\frac{i-2}{i+1}\right) \textbf{I} & - \frac{i-2}{i+1}\textbf{I} \end{bmatrix} \neq J_{x_{i+1}}^T \] where $J_g$ is the Jacobian of the function $g$. In what follows, we show that looking at the residuals \BEA r(x) \triangleq g(x)-x, \quad r_i = r(y_{i-1}) = x_i-y_{i-1}, \quad R_i = [r_1\ldots r_i], \label{eq:residue} \EEA allows us to recover the convergence results from \citep{scieur2016regularized} when the Jacobian of the function $g$, written $J_g$, is symmetric. Moreover, we extend the analysis to \textit{non-symmetric} Jacobians. This allows us to accelerate, for instance, accelerated methods or primal-dual methods. We now briefly recall the key ideas driving nonlinear acceleration schemes. \subsection{Linear Algorithms} In this section, we focus on iterative algorithms $g$ that are linear, i.e., where \BEQ g(x) = G(x-x^*) + x^*. \label{eq:linear_g} \EEQ The matrix $G$ is of size $d\times d$, and, contrary to \citep{scieur2016regularized}, we do not assume symmetry. Here, $x^*$ is a fixed point of $g$. In optimization problems, where $g$ is typically a gradient mapping, $x^*$ is the minimum of an objective function. It is worth mentioning that \eqref{eq:linear_g} is equivalent to an affine map $Ax+b$, so we do not need to know $x^*$ to evaluate the mapping $g(x)$. We first treat the case where $g(x)$ is linear, as the nonlinear case will then be handled as a perturbation of the linear one. We introduce $\mathcal{P}_{[N]}^{(1)}$, the set of all polynomials $p$ whose degree is \textit{exactly} $N$ (i.e., the leading coefficient is nonzero), and whose coefficients sum to one. More formally, \BEQ \mathcal{P}_{[N]}^{(1)} = \{ p \in \reals[x]: \deg(p) = N,\, p(1) = 1 \}. 
\EEQ The following proposition extends a result by \citet{scieur2016regularized} showing that iterates in~\eqref{eq:general_iteration} can be written using polynomials in $\mathcal{P}_{[N]}^{(1)}$. This formulation is helpful to derive the rate of convergence of the Nonlinear Acceleration algorithm. \begin{proposition} \label{prop:poly_iter} Let $g$ be the linear function \eqref{eq:linear_g}. Then, the $N$-th iteration of \eqref{eq:general_iteration} is equivalent to \BEA \label{eq:polynomial_iteration} x_N = x^* + G(y_{N-1}-x^*), \qquad y_N = x^* + p_N(G)(x_0-x^*),\quad \mbox{for some $p_N\in \mathcal{P}_{[N]}^{(1)}$.} \EEA \end{proposition} \begin{proof} We prove \eqref{eq:polynomial_iteration} iteratively. Of course, at iteration zero, \[ y_0 = x^* + 1 \cdot (x_0 - x^*), \] and $1$ is indeed a polynomial of degree zero whose coefficients sum to one. Now, assume that for all $j \leq i$, \[ y_{j-1} - x^* = p_{j-1}(G)(x_0-x^*), \quad p_{j-1} \in \mathcal{P}_{[j-1]}^{(1)}. \] We show that \[ y_{i} - x^* = p_{i}(G)(x_0-x^*), \quad p_{i} \in \mathcal{P}_{[i]}^{(1)}. \] By definition of $y_{i}$ in~\eqref{eq:general_iteration}, \[ y_{i} - x^* = \textstyle \sum_{j=1}^i \alpha_j^{(i)} x_j + \beta_j^{(i)} y_{j-1} - x^*, \] where $(\alpha + \beta)^T\textbf{1} = 1$. This also means that \[ y_{i} - x^* = \textstyle \sum_{j=1}^i \alpha_j^{(i)} (x_j-x^*) + \beta_j^{(i)} (y_{j-1}-x^*) . \] By definition, $x_{j} -x^* = G(y_{j-1} - x^*) $, so \[ y_{i} - x^* = \textstyle \sum_{j=1}^i \left(\alpha_j^{(i)} G + \beta_j^{(i)} I\right) (y_{j-1}-x^*) . \] By the recurrence assumption, \[ y_{i} - x^* = \textstyle \sum_{j=1}^i \left(\alpha_j^{(i)} G + \beta_j^{(i)} I\right) p_{j-1}(G) (x_0-x^*), \] which is a linear combination of polynomials, thus $y_{i} - x^* = p(G)(x_0-x^*)$. It remains to show that $p \in \mathcal{P}_{[i]}^{(1)}$. 
Indeed, \[ \deg (p) = \max_j \max \left\{(1+\deg (p_{j-1}(G))) 1_{\alpha_j \neq 0},\;\; \deg (p_{j-1}(G)) 1_{\beta_j \neq 0}\right\}, \] where $1_{\alpha_j \neq 0} = 1$ if $\alpha_j \neq 0$ and $0$ otherwise. By assumption, $\alpha_i \neq 0$, thus \[ \deg (p) \geq 1+\deg (p_{i-1}(G)) = i. \] Since $p$ is a linear combination of polynomials of degree at most $i$, \[ \deg (p) = i. \] It remains to show that $p(1)=1$. Indeed, \[ p(1) = \sum_{j=1}^i \left(\alpha_j^{(i)} 1 + \beta_j^{(i)} \right) p_{j-1}(1). \] Since $p_{j-1}(1) = 1$ and $\sum_{j=1}^i \left(\alpha_j^{(i)} + \beta_j^{(i)}\right) = 1$, we get $p(1) = 1$, which proves the proposition. \end{proof} \subsection{Regularized Nonlinear Acceleration Scheme} We now propose a modification of RNA that can accelerate any algorithm of the form \eqref{eq:general_iteration} by combining the approaches of \cite{anderson1965iterative} and \cite{scieur2016regularized}. We introduce a mixing parameter~$\eta$, as in Anderson acceleration (which only impacts the constant term in the rate of convergence). Throughout this paper, \textbf{RNA} will refer to Algorithm \ref{algo:rna} below. \begin{algorithm}[htb] \caption{Regularized Nonlinear Acceleration (\textbf{RNA})} \label{algo:rna} \begin{algorithmic}[1] \STATE {\bfseries Data:} Matrices $X$ and $Y$ of size $d\times N$ constructed from the iterates as in~\eqref{eq:general_iteration} and~\eqref{eq:def_xy}. \STATE {\bfseries Parameters:} Mixing $\eta\neq 0$, regularization $\lambda \geq 0$.\\ \hrulefill \STATE \textbf{1.} Compute matrix of residuals $R = X-Y$. \STATE \textbf{2.} Solve \BEQ c^{\lambda} = \frac{(R^TR+(\lambda\|R\|^2_2) I)^{-1} \textbf{1}_N}{\textbf{1}_N^T(R^TR+(\lambda\|R\|^2_2) I)^{-1}\textbf{1}_N}. \label{eq:cl} \EEQ \STATE \textbf{3.} Compute extrapolated solution $y^{\text{extr}} = (Y-\eta R)c^{\lambda}$. 
\end{algorithmic} \end{algorithm} \subsection{Computational Complexity} \citet{scieur2016regularized} discuss the complexity of Algorithm \ref{algo:rna} in the case where $N$ is small (compared to $d$). When the algorithm is used once on $X$ and $Y$ (batch acceleration), the computational complexity is $O(N^2 d)$, because we have to multiply $R^T$ and $R$. However, when Algorithm \ref{algo:rna} accelerates iterates on-the-fly, the matrix $R^TR$ can be updated using only $O(Nd)$ operations. The complexity of solving the linear system is negligible, as it takes only $O(N^3)$ operations. Although the cubic dependence is unfavorable for large $N$, in our experiments $N$ is typically around 10, adding negligible computational overhead compared to the computation of a gradient in high dimension, which is larger by orders of magnitude. \subsection{Convergence Rate} We now analyze the convergence rate of Algorithm \ref{algo:rna} with $\lambda = 0$, which corresponds to Anderson acceleration \citep{anderson1965iterative}. In particular, we show its optimal rate of convergence when $g$ is a linear function. In the context of optimization, this is equivalent to applying gradient descent to minimize quadratics. Using this special structure, the iterations~\eqref{eq:polynomial_iteration} produce a sequence of polynomials, and the next theorem uses this property to bound the convergence rate. Compared to previous work in this vein \citep{scieur2016regularized,scieur2017nonlinear}, where the results only apply to algorithms of the form $x_{i+1} = g(x_i)$, this theorem applies to \textit{any} algorithm of the class \eqref{eq:general_iteration}; in particular, we allow $G$ to be nonsymmetric. \begin{theorem} \label{thm:optimal_rate} Let $X$, $Y$ in~\eqref{eq:def_xy} be formed using iterates from \eqref{eq:general_iteration}. Let $g$ be defined in~\eqref{eq:linear_g}, where $G\in \mathbb{R}^{d\times d}$ does not have $1$ as eigenvalue. 
The norm of the residual of the extrapolated solution $y^{\text{extr}}$, written \[ r(y^{\text{extr}}) = g(y^{\text{extr}}) - y^{\text{extr}}, \] produced by Algorithm \ref{algo:rna} with $\lambda = 0$, is bounded by \[ \|r(y^{\text{extr}})\|_2 \leq \| I-\eta (G-I) \|_2 ~\| p^*_{N-1}(G)r(x_0) \|_2, \] where $p^*_{N-1}$ solves \BEQ \textstyle p^*_{N-1} = \argmin_{p\in \mathcal{P}_{[N-1]}^{(1)}} \| p(G)r(x_0) \|_2. \label{eq:minimal_polynomial} \EEQ Moreover, after at most $d$ iterations, the algorithm converges to the exact solution, satisfying $\|r(y^{\text{extr}})\|_2 = 0$. \end{theorem} \begin{proof} First, we write the definition of $y^{\text{extr}}$ from Algorithm \ref{algo:rna} when $\lambda = 0$, \[ y^{\text{extr}} -x^* = (Y-\eta R) c - x^*. \] Since $c^T\textbf{1} = 1$, we have $X^*c = x^*$, where $X^* = [x^*,x^*,\ldots, x^*]$. Thus, \[ y^{\text{extr}} -x^* = (Y-X^*-\eta R) c. \] Since $R = (G-I)(Y-X^*)$, \[ y^{\text{extr}} - x^* = (I-\eta (G-I))(Y-X^*) c . \] We have seen that the columns of $Y-X^*$ are polynomials of different degrees, whose coefficients sum to one \eqref{eq:polynomial_iteration}. This means \[ y^{\text{extr}} - x^* = (I-\eta (G-I))\sum_{i=0}^{N-1} c_i p_i(G)(x_0-x^*). \] In addition, its residual reads \BEAS r(y^{\text{extr}}) & = & (G-I)(y^{\text{extr}} - x^*) \\ & = & (G-I)(I-\eta (G-I))\sum_{i=0}^{N-1} c_i p_i(G)(x_0-x^*) \\ & = & (I-\eta (G-I))\sum_{i=0}^{N-1} c_i p_i(G)r(x_0). \EEAS Its norm is thus bounded by \[ \|r(y^{\text{extr}})\| \leq \|I-\eta (G-I)\| \|\underbrace{\sum_{i=0}^{N-1} c_i p_i(G)r(x_0)}_{=Rc}\|. \] By definition of $c$ from Algorithm \ref{algo:rna}, \[ \|r(y^{\text{extr}})\| \leq \|I-\eta (G-I)\| \min_{c:c^T\textbf{1} = 1} \|\sum_{i=0}^{N-1} c_i p_i(G)r(x_0)\|. \] Because each $p_i$ has degree exactly $i$, the $p_i$ form a basis of the set of all polynomials of degree at most $N-1$. In addition, because $p_i(1) = 1$, restricting the sum of coefficients $c_i$ to $1$ generates the set $\mathcal{P}_{[N-1]}^{(1)}$. 
We have thus \[ \|r(y^{\text{extr}})\| \leq \|I-\eta (G-I)\| \min_{p\in \mathcal{P}_{[N-1]}^{(1)} } \|p(G)r_0\|. \] Finally, when $N>d$, it suffices to take the minimal polynomial of the matrix $G$, written $p_{\min,G}$, with coefficients normalized by $p_{\min,G}(1)$. Since $1$ is not an eigenvalue of $G$, $p_{\min,G}(1)$ cannot be zero. \end{proof} In optimization, the quantity $\|r(y^{\text{extr}})\|_2$ is proportional to the norm of the gradient of the objective function computed at $y^{\text{extr}}$. This last theorem reduces the analysis of the rate of convergence of RNA to the analysis of the quantity \eqref{eq:minimal_polynomial}. In the symmetric case discussed in~\citep{scieur2016regularized}, this bound recovers the optimal rate in \citep{nesterov2013introductory}, which also appears in the complexity analysis of Krylov methods (like GMRES or conjugate gradients \citep{golub1961chebyshev,golub2012matrix}) for quadratic minimization. \oldsection{Crouzeix's Conjecture \& Chebyshev Polynomials on the Numerical Range}\label{s:crouzeix} We have seen in~\eqref{eq:minimal_polynomial} from Theorem~\ref{thm:optimal_rate} that the convergence rate of nonlinear acceleration is controlled by the norm of a matrix polynomial in the operator $G$, with \[ \|r(y^{\text{extr}})\|_2 \leq \| I-\eta (G-I) \|_2 ~\| p^*_{N-1}(G)r(x_0) \|_2, \] where $r(y^{\text{extr}})=g(y^{\text{extr}}) - y^{\text{extr}}$ and $p^*_{N-1}$ solves \[ p^*_{N-1} = \argmin_{p\in \mathcal{P}_{[N-1]}^{(1)}} \| p(G)r(x_0) \|_2. \] The results in~\citep{scieur2016regularized} recalled above handle the case where the operator $G$ is {\em symmetric}. Bounding $\|p(G)\|_2$ when $G$ is non-symmetric is much more difficult. Fortunately, Crouzeix's conjecture \citep{Crou04} allows us to bound $\|p(G)\|_2$ by solving a Chebyshev problem on the numerical range of $G$, in the complex plane. 
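Before turning to the nonsymmetric analysis, Algorithm~\ref{algo:rna} itself is short; the following Python sketch (illustrative variable names, with a diagonal quadratic chosen so that $g$ is linear, as in Theorem~\ref{thm:optimal_rate}) implements its three steps:

```python
import numpy as np

def rna(X, Y, lam=1e-12, eta=1.0):
    """Sketch of the RNA scheme: extrapolate from iterate matrices X, Y."""
    R = X - Y                                   # step 1: residuals
    RR = R.T @ R
    reg = lam * np.linalg.norm(R, 2) ** 2       # lambda * ||R||_2^2
    ones = np.ones(R.shape[1])
    # Step 2: weights c^lambda from the regularized linear system
    z = np.linalg.solve(RR + reg * np.eye(R.shape[1]), ones)
    c = z / (ones @ z)
    return (Y - eta * R) @ c                    # step 3: extrapolated point

# Illustration on gradient descent for a diagonal quadratic
# f(x) = 0.5 x'Ax - b'x, so that g is linear as in the theorem.
a = np.linspace(0.1, 1.0, 10)
A, b = np.diag(a), np.ones(10)

ys = [np.zeros(10)]
xs = []
for _ in range(10):
    xs.append(ys[-1] - (A @ ys[-1] - b) / a.max())  # gradient step, stepsize 1/L
    ys.append(xs[-1])                                # plain gradient descent
X, Y = np.stack(xs, axis=1), np.stack(ys[:-1], axis=1)

x_star = b / a
y_extr = rna(X, Y)
```

On this poorly conditioned example the extrapolated point is far closer to $x^*$ than the last gradient iterate, in line with the theorem.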
\begin{theorem}[\citet{Crou04}] Let $G\in\complexs^{n\times n}$ and $p(x)\in \complexs[x]$. We have \[ \|p(G)\|_2 \leq c\max_{z \in W(G)} |p(z)| \] for some absolute constant $c\geq 2$. \label{th:crouzeix} \end{theorem} Here $W(G)\subset \complexs$ is the numerical range of the matrix $G\in\reals^{n\times n}$, i.e. the range of the Rayleigh quotient \BEQ\label{eq:numrange} W(G) \triangleq \left\{ x^*Gx: \|x\|_2=1, x \in \complexs^n \right\}. \EEQ \citet{Crou07} shows $c\leq 11.08$, and Crouzeix's conjecture states that this can be further improved to $c=2$, which is tight. A more recent bound in \citep{Crou17} yields $c=1+\sqrt{2}$, and there is significant numerical evidence in support of the $c=2$ conjecture \citep{Gree17}. This conjecture has played a vital role in providing convergence results for e.g. the GMRES method \citep{saad1986gmres,choi2015roots}. Crouzeix's result allows us to turn the problem of finding uniform bounds for the norm of the matrix polynomial $\|p(G)\|_2$ into that of bounding $p(z)$ over the numerical range of $G$ in the complex plane, an arguably much simpler two-dimensional Chebyshev problem. \subsection{Numerical Range Approximations} The previous result links the convergence rate of accelerated algorithms with the optimum value of a Chebyshev problem over the numerical range of the operator $G$, and we now recall classical methods for computing the numerical range. There are no generic tractable methods for computing the exact numerical range of an operator $G$. However, efficient numerical methods approximate the numerical range based on key structural properties. The Toeplitz-Hausdorff theorem \citep{hausdorff1919wertvorrat, toeplitz1918algebraische} in particular states that the numerical range $W(G)$ is a closed convex bounded set. Therefore, it suffices to characterize points on the boundary; their convex hull then yields the numerical range. 
\citet{johnson1978numerical} made the following observations using the properties of the numerical range, \begin{align} \max_{z \in W(G)} Re(z) &= \max_{r \in W(H(G))} r= \lambda_{max}(H(G)) \label{eq: maxrealval}\\ W(e^{i\theta}G) &= e^{i\theta} W(G),\quad \forall \theta \in [0, 2\pi), \quad \label{eq: fieldvaluesrotation} \end{align} where $Re(z)$ is the real part of complex number $z$, $H(G)$ is the Hermitian part of $G$, i.e. $ H(G) = ({G + G^*})/{2}$, and $\lambda_{max}(H(G))$ is the maximum eigenvalue of $H(G)$. The first property implies that the line parallel to the imaginary axis is tangent to $W(G)$ at $\lambda_{max}(H(G))$. The second property can be used to determine other tangents via rotations. Using these observations, \citet{johnson1978numerical} showed that the points on the boundary of the numerical range can be characterized as $ p_\theta =\{v_\theta^*Gv_\theta : \theta \in [0, 2\pi)\} $ where $v_\theta$ is the normalized eigenvector corresponding to the largest eigenvalue of the Hermitian matrix \begin{equation} H_\theta = \frac{1}{2}(e^{i\theta}G + e^{-i\theta}G^*). \end{equation} The numerical range can thus be characterized as follows. \begin{theorem} \citep{johnson1978numerical} For any $G\in\complexs^{n\times n}$, we have \[ W(G) = Co\{p_\theta : 0\leq \theta < 2\pi\} \] where $Co\{Z\}$ is the convex hull of the set $Z$. \end{theorem} Note that $p_\theta$ may not be uniquely determined, as the eigenvectors $v_\theta$ may not be unique, but the convex hull above is uniquely determined. \subsection{Chebyshev Bounds \& Convergence Rate} Crouzeix's result means that the convergence rate of accelerated algorithms can be bounded via the optimum of the Chebyshev problem \BEQ\label{eq:cheb-C} \min_{\substack{p \in\complexs[z]\\p(1)=1}} ~~ \max_{z\in W(G)} |p(z)| \EEQ where $G\in\complexs^{n \times n}$. 
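The characterization above translates directly into a procedure; the sketch below (illustrative code, with a small Hermitian test matrix as the assumed input) sweeps over $\theta$, takes the top eigenvector of $H_\theta$, and records the boundary points $p_\theta$:

```python
import numpy as np

def numerical_range_boundary(G, n_angles=360):
    """Approximate boundary points of W(G) via Johnson's characterization:
    for each angle theta, the top eigenvector of the Hermitian part of
    e^{i theta} G gives a boundary point p_theta = v* G v."""
    pts = []
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * theta) * G + np.exp(-1j * theta) * G.conj().T) / 2
        w, V = np.linalg.eigh(H)       # eigenvalues in ascending order
        v = V[:, -1]                   # eigenvector of the largest eigenvalue
        pts.append(v.conj() @ G @ v)   # Rayleigh quotient: boundary point
    return np.array(pts)

# Sanity check on a Hermitian matrix: W(G) is then the real segment
# [lambda_min, lambda_max], so all boundary points are real in [0, 1].
G = np.diag([0.0, 0.25, 1.0]).astype(complex)
pts = numerical_range_boundary(G)
```

The convex hull of the returned points approximates $W(G)$; for nonnormal $G$ the points trace a genuinely two-dimensional region.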
This problem has a trivial answer when the numerical range $W(G)$ is spherical, but the convergence rate can be significantly improved when $W(G)$ is less isotropic. \subsubsection{Exact Bounds on Ellipsoids} We can use an outer ellipsoidal approximation of $W(G)$, bounding the optimum value of the Chebyshev problem~\eqref{eq:cheb-C} by \BEQ\label{eq:ChebE} \min_{\substack{p(z)\in\complexs[x]\\p(1)=1}} ~~ \max_{z\in\mathcal{E}_r} |p(z)| \EEQ where \BEQ\label{eq:Er} \mathcal{E}_r\triangleq \{z\in\complexs:|z-1|+|z+1| \leq r+ 1/r\}. \EEQ This Chebyshev problem has an explicit solution in certain regimes. As in the real case, we will use $T_k(z)$, the Chebyshev polynomial of degree $k$. \citet{Fisc91} shows the following result on the optimal solution to problem ~\eqref{eq:ChebE} on ellipsoids. \begin{theorem}\citep[Th.\,2]{Fisc91}\label{th:fish} Let $k\geq 5$, $r>1$ and $\kappa \in \reals$. The polynomial \[ T_{k,\kappa}(z)=T_k(z)/T_k(1-\kappa) \] where \[ T_k(z)= \frac{1}{2}\left(v^k + \frac{1}{v^k}\right), \quad z =\frac{1}{2}\left( v + \frac{1}{v}\right) \] is the unique solution of problem~\eqref{eq:ChebE} if either \[ |1-\kappa| \geq \frac{1}{2}\left(r^{\sqrt{2}} + r^{-\sqrt{2}}\right) \] or \[ |1-\kappa| \geq \frac{1}{2 a_r}\left(2a_r^2 - 1 + \sqrt{2a_r^4-a_r^2+1}\right) \] where $a_r=(r+1/r)/2.$ \end{theorem} The optimal polynomial for a general ellipse $\mathcal{E}$ can be obtained by a simple change of variables. That is, the polynomial $\bar{T}_k(z)={T_k(\frac{c-z}{d})}/{T_k(\frac{c-1}{d})}$ is optimal for the problem \eqref{eq:ChebE} over any ellipse $\mathcal{E}$ with center $c$, focal distance $d$ and semi-major axis $a$. It can be easily seen that the maximum value is achieved at the point $z=a$ on the real axis, so the solution to the min-max problem is given by $\bar{T}_k(a)$. Figure \ref{fig:chebyshev_cont5} shows the surface of the optimal polynomial with degree~$5$ for $a=0.8, d=0.76$ and $c=0$. 
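The value $\bar{T}_k(a)$ is easy to evaluate numerically; the sketch below (illustrative code, using the example values quoted above) computes it via Chebyshev-series evaluation:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_bound(k, a, d, c=0.0):
    """Evaluate |Tbar_k(a)| = |T_k((c - a)/d) / T_k((c - 1)/d)|, the
    min-max value of the Chebyshev problem over an ellipse with center c,
    focal distance d and semi-major axis a (normalization point z = 1)."""
    Tk = [0.0] * k + [1.0]              # coefficients of T_k in the Chebyshev basis
    return abs(C.chebval((c - a) / d, Tk) / C.chebval((c - 1) / d, Tk))

# Example values from the text: degree 5, a = 0.8, d = 0.76, c = 0.
rho = cheb_bound(5, a=0.8, d=0.76)
```

A value of `rho` well below one indicates a strong per-restart contraction; it grows toward one as the ellipse approaches the normalization point $z=1$.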
\begin{figure} \caption{Surface of the optimal polynomial $\bar{T}_n(z)$ with degree $5$ for $a=0.8, d=0.76$ and $c=0$. } \label{fig:chebyshev_cont5} \end{figure} Figure \ref{fig:chebyshev_ecc5} shows the solutions to the problem \eqref{eq:ChebE} with degree $5$ for various ellipses with center at origin, various eccentricity values $e = d/a$ and semi-major axis $a$. \begin{figure} \caption{Optimal value of the Chebyshev problem \eqref{eq:ChebE} for ellipses with centers at origin. Lower values of the maximum of the Chebyshev problem mean faster convergence. The higher the eccentricity here, the faster the convergence.} \label{fig:chebyshev_ecc5} \end{figure} Here, zero eccentricity corresponds to a sphere, while an eccentricity of one corresponds to a line. \oldsection{Accelerating Non-symmetric Algorithms}\label{s:algos} We have seen in the previous section that (asymptotically) controlling the convergence rate of the nonlinear acceleration scheme in Algorithm~\ref{algo:rna} for generic operators $G$ means bounding the optimal value of the Chebyshev optimization problem in~\eqref{eq:cheb-C} over the numerical range of the operator driving iterations near the optimum. In what follows, we explicitly detail this operator and approximate its numerical range for two classical algorithms, Nesterov's accelerated method \citep{Nest83} and Chambolle-Pock's Primal-Dual Algorithm \citep{chambolle2011first}. We focus on quadratic optimization below. We will see later in Section~\ref{s:nonlin} that, asymptotically at least, the behavior of acceleration on generic problems can be analyzed as a perturbation of the quadratic case. \subsection{Nesterov's Accelerated Gradient Method} The iterates formed by Nesterov's accelerated gradient descent method for minimizing smooth strongly convex functions with constant stepsize follow \begin{equation} \left\{ \begin{aligned} x_k &= y_{k-1} - \alpha \nabla f(y_{k-1}) \\ y_{k} &= x_k + \beta(x_k - x_{k-1}) \end{aligned} \right. 
\label{eq:nest_iterate} \end{equation} with $\beta = \frac{\sqrt{L} - \sqrt{\mu}}{\sqrt{L} + \sqrt{\mu}}$, where $L$ is the gradient's Lipschitz continuity constant and $\mu$ is the strong convexity parameter. This algorithm is better handled using the results in previous sections, and we only use it here to better illustrate our results on non-symmetric operators. \subsubsection{Nesterov's Operator in the quadratic case} When minimizing quadratic functions $f(x) = \frac{1}{2}\|Bx - b\|^2$, using constant stepsize~$1/L$, these iterations become, \[ \begin{cases} x_k - x^* &= y_{k-1} - x^* - \frac{1}{L}B^T(By_{k-1} - b) \\ y_k - x^* &= x_k - x^* + \beta(x_k - x^* - x_{k-1} + x^*). \end{cases} \] or again, \BEAS \begin{bmatrix} x_{k} - x^*\\ y_{k} - x^* \end{bmatrix} = \begin{bmatrix} 0 & A\\ -\beta I & (1 + \beta) A \end{bmatrix} \begin{bmatrix} x_{k-1} - x^*\\ y_{k-1} - x^* \end{bmatrix} \EEAS where $A = I - \frac{1}{L}B^TB$. We write $G$ for the {\em non-symmetric} linear operator in these iterations, i.e. \begin{align} G = \begin{bmatrix} 0 & A\\ -\beta I & (1 + \beta) A \end{bmatrix}. \end{align} The results in Section~\ref{s:nacc} show that we can accelerate the sequence $z_k = (x_{k},y_{k})$ if the solution to the min-max problem \eqref{eq:cheb-C} defined over the numerical range of the operator $G$ is bounded. \subsubsection{Numerical Range} We can compute the numerical range of the operator $G$ using the techniques described in Section~\ref{s:crouzeix}. In the particular case of Nesterov's accelerated gradient method, the numerical range is the convex hull of ellipses. 
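As a sanity check (illustrative code, with a random quadratic as the assumed objective), one Nesterov step applied to the stacked errors indeed coincides with a multiplication by the block operator $G$:

```python
import numpy as np

# Build the (nonsymmetric) linear operator G driving Nesterov's iterations
# on f(x) = 0.5 ||Bx - b||^2 and check that one Nesterov step equals
# multiplying the stacked error vector (x_{k-1} - x*, y_{k-1} - x*) by G.
rng = np.random.default_rng(3)
d = 6
B = rng.standard_normal((d, d))
b = rng.standard_normal(d)
BtB = B.T @ B
Lc = np.linalg.norm(BtB, 2)              # Lipschitz constant of the gradient
mu = np.linalg.eigvalsh(BtB).min()       # strong convexity parameter
beta = (np.sqrt(Lc) - np.sqrt(mu)) / (np.sqrt(Lc) + np.sqrt(mu))

A = np.eye(d) - BtB / Lc                 # A = I - (1/L) B'B
G = np.block([[np.zeros((d, d)), A],
              [-beta * np.eye(d), (1 + beta) * A]])

x_star = np.linalg.solve(BtB, B.T @ b)
x_prev = rng.standard_normal(d)
y = rng.standard_normal(d)

# One step of Nesterov with constant momentum beta
x = y - (B.T @ (B @ y - b)) / Lc
y_next = x + beta * (x - x_prev)

err = np.concatenate([x_prev - x_star, y - x_star])
err_next = np.concatenate([x - x_star, y_next - x_star])
```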
We show this by considering the $2\times 2$ operators obtained by replacing the symmetric positive definite matrix $A$ with its eigenvalues, to form \begin{align} \label{eq:nesteigen} G_j = \begin{bmatrix} 0 & \lambda_j\\ -\beta & (1 + \beta) \lambda_j \end{bmatrix} \quad \text{for } j \in \{1,2,\cdots,n\} \end{align} where $0<\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n < 1$ are the eigenvalues of the matrix $A$. We have the following result. \begin{theorem} The numerical range of operator $G$ is the convex hull of the numerical ranges of the operators $G_j$, i.e. $W(G) = \Co\{W(G_1),W(G_2),\cdots,W(G_n)\}$. \end{theorem} \begin{proof} Let $v_1,v_2,\cdots,v_n$ be eigenvectors associated with the eigenvalues $\lambda_1,\lambda_2,\cdots,\lambda_n$ of the matrix $A$. We can write \begin{equation*} A = \sum_{j=1}^{n} \lambda_jv_jv_j^T \qquad I = \sum_{j=1}^{n}v_jv_j^T. \end{equation*} Let $t \in W(G) \subset \complexs$. By definition of the numerical range, there exists $z \in \complexs^{2n}$ with $z^*z = 1$ and \begin{align*} t &= z^*\begin{bmatrix} 0 & A\\ -\beta I & (1 + \beta) A \end{bmatrix}z \\ &= z^*\begin{bmatrix} 0 & \sum_{j=1}^{n}\lambda_j v_j v_j^T\\ -\beta \sum_{j=1}^{n}v_j v_j^T & (1 + \beta) \sum_{j=1}^{n}\lambda_j v_j v_j^T \end{bmatrix} z \\ &= \sum_{j=1}^{n}z^*\left(\begin{bmatrix} 0 & \lambda_j \\ -\beta & (1 + \beta) \lambda_j \end{bmatrix} \otimes v_jv_j^T\right)\vect([z_1,z_2])\\ &= \sum_{j=1}^{n}z^*\vect\left( v_jv_j^T [z_1,z_2]\begin{bmatrix} 0 & \lambda_j \\ -\beta & (1 + \beta) \lambda_j \end{bmatrix}^T \right)\\ \end{align*} and since $v_jv_j^Tv_jv_j^T=v_jv_j^T$, this last term can be written \begin{align*} t &= \sum_{j=1}^{n} \Tr\left( v_jv_j^T [z_1,z_2]\begin{bmatrix} 0 & \lambda_j \\ -\beta & (1 + \beta) \lambda_j \end{bmatrix}^T [z_1,z_2]^* v_jv_j^T\right)\\ &= \sum_{j=1}^{n} \Tr(v_jv_j^T) \left( [v_j^Tz_1,v_j^Tz_2]\begin{bmatrix} 0 & \lambda_j \\ -\beta & (1 + \beta) \lambda_j \end{bmatrix}^T [z_1^*v_j,z_2^*v_j]^T \right)\\
\end{align*} Now, let $w_j=[z_1^*v_j,z_2^*v_j]^T$ and \begin{equation*} y_j = \frac{w_j^TG_jw_j}{\|w_j\|_2^2}. \end{equation*} By the definition of the numerical range, we have $y_j \in W(G_j)$. Therefore, \begin{align*} t &= \sum_{j=1}^{n}\left(\frac{w_j^TG_jw_j}{\|w_j\|_2^2}\right)\|w_j\|_2^2 \end{align*} hence \[ t \in \Co(W(G_1), W(G_2),\cdots,W(G_n)). \] We have shown that if $t \in W(G)$ then $t \in \Co(W(G_1), W(G_2),\cdots,W(G_n))$. We can show the converse by following the above steps backwards. That is, if $t \in \Co(W(G_1), W(G_2),\cdots,W(G_n))$ then we have \begin{align*} t = \sum_{j=1}^{n} \theta_j \left(\frac{w_j^TG_jw_j}{\|w_j\|_2^2}\right) \end{align*} where $\theta_j > 0$, $\sum_{j=1}^{n}\theta_j =1$ and $w_j \in \complexs^{2}$. Now, let \begin{align*} z = \sum_{j=1}^{n}\frac{\vect(v_jw_j^T)\theta_j^{1/2}}{\|w_j\|} \end{align*} and we have \begin{align*} t = \sum_{j=1}^{n}[z_1^*v_j z_2^*v_j]G_j\begin{bmatrix} v_j^T z_1 \\ v_j^T z_2 \end{bmatrix} \end{align*} wherein we used the fact that $v_j^Tv_k = 0$ for any $j \neq k$ and $v_j^Tv_j =1$ in computing $w_j^T = [z_1^*v_j z_2^*v_j]$. We also note that $z^*z =1$ by the definition of $z$ and, rewriting the sum in matrix form, we can show that $t \in W(G)$, which completes the proof. \end{proof} To keep the optimal value of the Chebyshev problem in~\eqref{eq:cheb-C} bounded, and thus control convergence given the normalization constraint $p(1)=1$, the point $(1,0)$ must lie outside the numerical range. Because the numerical range is convex and symmetric w.r.t. the real axis (the operator $G$ is real), it suffices to check that the maximum real value of the numerical range is less than $1$. For $2\times 2$ matrices, the boundary of the numerical range is given by an ellipse \citep{donoghue1957}, so the numerical range of Nesterov's accelerated gradient method is the convex hull of ellipsoids.
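As a quick numerical illustration of this construction, the boundary of $W(G)$ can be traced with the standard rotation technique: for each angle $\theta$, the top eigenvector of the Hermitian part of $e^{i\theta}G$ yields a boundary point $v^*Gv$. The sketch below assumes NumPy and a random quadratic instance; it is illustrative only, not the authors' implementation.

```python
import numpy as np

def numerical_range_boundary(G, n_angles=64):
    """Approximate boundary points of the numerical range W(G).

    For each angle theta, the top eigenvector v of the Hermitian part of
    exp(1j*theta)*G gives the boundary point v* G v (rotation technique).
    """
    pts = []
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * theta) * G + np.exp(-1j * theta) * G.conj().T) / 2
        _, V = np.linalg.eigh(H)
        v = V[:, -1]                      # eigenvector of the largest eigenvalue
        pts.append(v.conj() @ G @ v)
    return np.array(pts)

# Random quadratic f(x) = 0.5*||Bx - b||^2 and Nesterov's operator on (x_k, y_k).
rng = np.random.default_rng(0)
d = 20
B = rng.standard_normal((d, d))
Q = B.T @ B
L, mu = np.linalg.eigvalsh(Q).max(), np.linalg.eigvalsh(Q).min()
A = np.eye(d) - Q / L
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
G = np.block([[np.zeros((d, d)), A],
              [-beta * np.eye(d), (1 + beta) * A]])
boundary = numerical_range_boundary(G)
```

Since the numerical range always contains the spectrum, the sampled boundary encloses every eigenvalue of $G$, which gives a simple sanity check on the computation.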
The ellipse in \citep{donoghue1957} can be determined directly from the entries of the matrix, as shown in \citet{johnson1974computation}. \begin{theorem}\citep{johnson1974computation}\label{th:jhon} For any real $2\times 2$ matrix \[ \begin{bmatrix} a & b\\ c & d \end{bmatrix} \] the boundary of the numerical range is an ellipse whose axes are the line segments joining the points $x$ to $y$ and $w$ to $z$ respectively, where \begin{align*} x &= \frac{1}{2}(a + d - ((a -d)^2 + (b+c)^2)^{1/2})\\ w &= \frac{a+d}{2} - i\left|\frac{b-c}{2}\right|\\ y &= \frac{1}{2}(a + d + ((a -d)^2 + (b+c)^2)^{1/2})\\ z &= \frac{a+d}{2} + i\left|\frac{b-c}{2}\right| \end{align*} are points in the complex plane. \end{theorem} This allows us to compute the maximum real value of $W(G)$, i.e. the point of intersection of $W(G)$ with the real line, explicitly as \BEAS re(G) &=& \max \mathop{Re}(W(G)) = \max_j \max \mathop{Re}(W(G_j)) \\ &=& \frac{1}{2}\left((1 + \beta)\lambda_n + \sqrt{\lambda_n^2(1+\beta)^2 + (\lambda_n - \beta)^2}\right) \EEAS where $\lambda_n = 1 - \frac{\mu}{L}$. We observe that $re(G)$ is a function of the condition number of the problem and takes values in the interval $[0, 2]$. Therefore, RNA will only work on Nesterov's accelerated gradient method when $re(G) < 1$ holds, which implies that the condition number of the problem $\kappa = \frac{L}{\mu}$ should be less than approximately $2.5$, which is highly restrictive. An alternative approach is to use RNA on a sequence of iterates sampled every few iterations, which is equivalent to using powers of the operator~$G$. We expect the numerical radius of some power of operator~$G$ to be less than 1 for any conditioning of the problem. This is because the iterates converge at an $R$-linear rate, so the norm of the powers of the operator decreases at an $R$-linear rate with the power.
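Theorem~\ref{th:jhon} is easy to evaluate numerically. For the $2\times 2$ blocks $G_j$ of Nesterov's operator, the endpoint $y$ of the real axis segment coincides with the largest eigenvalue of the Hermitian part, and recovers the closed-form expression for $re(G)$ above. A sketch, assuming NumPy (the condition number $\kappa = 100$ is an arbitrary example):

```python
import numpy as np

def ellipse_axes_2x2(a, b, c, d):
    """Axis endpoints (x, y, w, z) of the numerical-range ellipse of the
    real 2x2 matrix [[a, b], [c, d]], following the formulas above."""
    r = np.sqrt((a - d) ** 2 + (b + c) ** 2)
    x = 0.5 * (a + d - r)
    y = 0.5 * (a + d + r)
    w = 0.5 * (a + d) - 1j * abs(b - c) / 2
    z = 0.5 * (a + d) + 1j * abs(b - c) / 2
    return x, y, w, z

def re_G(lambdas, beta):
    """Closed-form max real value of W(G) for Nesterov's operator,
    maximized over the 2x2 blocks G_j."""
    lam = np.asarray(lambdas, dtype=float)
    vals = 0.5 * ((1 + beta) * lam
                  + np.sqrt(lam ** 2 * (1 + beta) ** 2 + (lam - beta) ** 2))
    return vals.max()

# Example with condition number kappa = L/mu = 100.
L_, mu = 1.0, 1e-2
beta = (np.sqrt(L_) - np.sqrt(mu)) / (np.sqrt(L_) + np.sqrt(mu))
r = re_G([1 - mu / L_], beta)   # here r > 1: RNA fails on raw Nesterov iterates
```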
Therefore, using the property that the numerical radius is bounded by the norm of the operator, we have \begin{equation*} re(G^p) = \max \mathop{Re}(W(G^p)) \leq r_{G^p} \leq \|G^p\| \leq C_p \rho^p \end{equation*} where $r_{G^p}$ is the numerical radius of $G^p$, $\rho < 1$ is the linear rate of the iterates and $C_p$ is a constant. Figure \ref{fig:fieldvals_random50_nest} shows the numerical range of the powers of the operator~$G$ for a random matrix $B^TB$ with dimension $d = 50$. We observe that after some threshold value for the power~$p$, $(1,0)$ lies outside the numerical range of $G^p$, thus guaranteeing that the acceleration scheme will work. We also observe that the boundary of the numerical range is almost circular for higher powers $p$, which is consistent with results on optimal matrices in~\citep{Lewi18}. When the numerical range is circular, the solution of the Chebyshev problem is trivially equal to $z^p$, so RNA simply picks the last iterate and does not accelerate convergence. \begin{figure} \caption{Numerical range for the linear operator in Nesterov's method, on a random quadratic problem with dimension 50. Left: Operator~$G$. Right: Various operator powers~$G^p$. The RNA scheme will improve convergence whenever the point $(1,0)$ lies outside of the numerical range of the operator.} \label{fig:fieldvals_random50_nest} \end{figure} The difficulty in applying RNA to Nesterov's accelerated gradient method arises from the fact that the iterates can be non-monotonic. The restriction that $1$ should be outside the numerical range is necessary for both non-symmetric and symmetric operators. For symmetric operators, the numerical range is a line segment on the real axis and the numerical radius and spectral radius are equal, so this restriction is equivalent to having spectral radius less than $1$, i.e. having monotonically converging iterates.
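The threshold effect described above can be reproduced directly: compute $re(G^p) = \lambda_{\max}\big((G^p + (G^p)^T)/2\big)$ for increasing $p$ until the point $(1,0)$ leaves the numerical range. A sketch, assuming NumPy and a random least-squares instance (illustrative only):

```python
import numpy as np

def max_real_numrange(M):
    """max Re(W(M)): largest eigenvalue of the Hermitian part of M."""
    return np.linalg.eigvalsh((M + M.conj().T) / 2).max()

rng = np.random.default_rng(1)
d = 50
B = rng.standard_normal((2 * d, d))
Q = B.T @ B
L_, mu = np.linalg.eigvalsh(Q).max(), np.linalg.eigvalsh(Q).min()
beta = (np.sqrt(L_) - np.sqrt(mu)) / (np.sqrt(L_) + np.sqrt(mu))
A = np.eye(d) - Q / L_
G = np.block([[np.zeros((d, d)), A],
              [-beta * np.eye(d), (1 + beta) * A]])

# Find the first power p such that (1, 0) is outside W(G^p), i.e. re(G^p) < 1.
Gp, p = G.copy(), 1
while max_real_numrange(Gp) >= 1 and p < 500:
    Gp, p = Gp @ G, p + 1
```

Since the powers of a converging operator shrink in norm, the loop must terminate; the value of the threshold $p$ depends on the conditioning of the random instance.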
\subsection{Chambolle-Pock's Primal-Dual Algorithm} Chambolle-Pock's method is a first-order primal-dual algorithm for minimizing composite functions of the form \begin{equation}\label{prob:primal} \min_x h_p(x) := f(Ax) + g(x) \end{equation} where $f$ and $g$ are convex functions and $A$ is a continuous linear map. Optimization problems of this form arise in e.g. imaging applications like total variation minimization (see \cite{chambolle2016introduction}). The Fenchel dual of this problem is given by \begin{equation}\label{prob:dual} \max_y h_d(y) := - f^*(-y) - g^*(A^*y) \end{equation} where $f^*, g^*$ are the convex conjugates of $f, g$ respectively. These problems are the primal and dual formulations of the general saddle point problem \begin{equation}\label{prob:saddle} \min_x \max_y \langle Ax, y\rangle + g(x) - f^*(y), \end{equation} where $f^*, g$ are closed proper functions. \cite{chambolle2011first} designed a first-order primal-dual algorithm for solving these problems, where primal-dual iterates are given by \begin{equation}\label{eq:iter_cp} \left\{ \begin{aligned} y_{k+1} &= \mathbf{Prox}_{f^*}^\sigma(y_k + \sigma A\bar{x}_k) \\ x_{k+1} &= \mathbf{Prox}_g^\tau(x_k - \tau A^*y_{k+1}) \\ \bar{x}_{k+1} &= x_{k+1} + \theta (x_{k+1} - x_{k}) \end{aligned} \right. \end{equation} where $\sigma, \tau$ are the step length parameters, $\theta \in [0,1]$ is the momentum parameter and the proximal mapping of a function $f$ is defined as \[ \mathbf{Prox}_f^\tau(y) = \arg \min_x \left\{\|y - x\|^2/({2\tau}) + f(x)\right\}. \] Note that if the proximal mapping of a function is available, then the proximal mapping of its conjugate can easily be computed using Moreau's identity, \[ \mathbf{Prox}_f^\tau(y) + \tau\,\mathbf{Prox}_{f^*}^{1/\tau}(y/\tau) = y. \] The optimal strategy for choosing the step length parameters $\sigma, \tau$ and the momentum parameter $\theta$ depends on the smoothness and strong convexity parameters of the problem.
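The iterations in \eqref{eq:iter_cp} are straightforward to implement once the two proximal mappings are available. Below is a minimal sketch, assuming NumPy; the quadratic instance and its closed-form proximal mappings anticipate the example treated later in this section, and the parameter choices follow the strongly convex setting described next. This is an illustration, not the authors' implementation.

```python
import numpy as np

def chambolle_pock(prox_fstar, prox_g, A, x0, sigma, tau, theta, n_iter):
    """Primal-dual iterations: the y, x and extrapolated xbar updates."""
    x = x0.copy()
    xbar = x0.copy()
    y = np.zeros(A.shape[0])
    for _ in range(n_iter):
        y = prox_fstar(y + sigma * (A @ xbar), sigma)
        x_new = prox_g(x - tau * (A.T @ y), tau)
        xbar = x_new + theta * (x_new - x)
        x = x_new
    return x, y

# Toy instance: f(z) = 0.5*||z - b||^2 (so f* is 1-strongly convex, delta = 1)
# and g(x) = 0.5*mu*||x||^2 (gamma = mu).
rng = np.random.default_rng(0)
m, d, mu = 30, 10, 0.1
A = rng.standard_normal((m, d)) / np.sqrt(m)
b = rng.standard_normal(m)
prox_fstar = lambda y, s: (y - s * b) / (1 + s)   # closed-form prox of f*
prox_g = lambda x, t: x / (1 + t * mu)            # closed-form prox of g
Anorm = np.linalg.norm(A, 2)
sigma, tau = np.sqrt(mu) / Anorm, np.sqrt(1.0 / mu) / Anorm
theta = 1.0 / (1.0 + 2.0 * np.sqrt(mu) / Anorm)
x, y = chambolle_pock(prox_fstar, prox_g, A, np.zeros(d),
                      sigma, tau, theta, n_iter=1000)
# Reference solution of min_x 0.5*||Ax - b||^2 + 0.5*mu*||x||^2.
x_star = np.linalg.solve(A.T @ A + mu * np.eye(d), A.T @ b)
```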
When $f^*$ and $g$ are strongly convex with strong convexity parameters $\delta$ and $\gamma$ respectively, these parameters are chosen to be the constant values \begin{equation}\label{eq:pdgm_params} \sigma = \frac{1}{\|A\|}\sqrt{\frac{\gamma}{\delta}} \quad \tau = \frac{1}{\|A\|}\sqrt{\frac{\delta}{\gamma}} \qquad \theta = \left(1 + \frac{2\sqrt{\gamma\delta}}{\|A\|}\right)^{-1} \end{equation} to yield the optimal linear rate of convergence. When only one of $f^*$ or $g$ is strongly convex with strong convexity parameter $\gamma$, these parameters are chosen adaptively at each iteration as \begin{equation} \theta_{k} = (1 + 2\gamma\tau_k)^{-1/2}\quad\sigma_{k+1} ={\sigma_k}/{\theta_k} \quad \tau_{k+1} = \tau_{k}\theta_k \end{equation} to yield the optimal sublinear rate of convergence. A special case of the primal-dual algorithm with no momentum term, i.e., $\theta = 0$ in \eqref{eq:iter_cp}, is also known as the Arrow-Hurwicz method \citep{arrowhurwicz1960}. Although the theoretical complexity bounds for this algorithm are worse than those of methods including a momentum term, its performance is observed experimentally to be on par or sometimes better when the step length parameters are chosen as above. We first consider algorithms with no momentum term and apply RNA to the primal-dual sequence $z_k = (y_k,x_k)$. We note that, as in Nesterov's case, RNA can only be applied to non-symmetric operators for which the normalization point $1$ lies outside the numerical range. Therefore, the step length parameters $\tau, \sigma$ should be suitably chosen such that this condition is satisfied. \subsubsection{Chambolle-Pock's Operator in the Quadratic Case} When minimizing smooth strongly convex quadratic functions where $f(Ax) = \frac{1}{2}\|Ax - b\|^2$ and $g(x) = \frac{\mu}{2}\|x\|^2$, the proximal operators have closed form solutions.
That is, \[ \mathbf{Prox}_{f^*}^\sigma(y) =\frac{y - \sigma b}{1 + \sigma} \quad \mbox{and}\quad \mathbf{Prox}_{g}^\tau(x) = \frac{x}{1 + \tau\mu}. \] Iterates of the primal-dual algorithm with no momentum term can then be written \begin{equation*} \begin{aligned} y_{k+1} &= \frac{y_k + \sigma Ax_k - \sigma b}{1 + \sigma}, \quad x_{k+1} &= \frac{x_k - \tau A^Ty_{k+1}}{1 + \tau\mu}. \end{aligned} \end{equation*} Note that the optimal primal and dual solutions satisfy $y^* = Ax^* - b$ and $x^* = \frac{-1}{\mu}A^Ty^*$. This yields the following operator for the iterations \begin{align} G = \begin{bmatrix} \frac{I}{1 + \sigma} & \frac{\sigma A}{1 + \sigma}\\ \frac{\tau A^T}{(1 + \sigma)(1 + \tau\mu)} & \frac{I}{1 + \tau\mu} - \frac{\tau\sigma A^TA}{(1 + \sigma)(1 + \tau\mu)} \\ \end{bmatrix}. \end{align} Note that $G$ is a non-symmetric operator except when $\sigma = \frac{\tau}{1 + \tau\mu}$, in which case the numerical range is a line segment on the real axis and the spectral radius is equal to the numerical radius. \subsubsection{Numerical Range} The numerical range of the operator can be computed using the techniques described in Section \ref{s:nacc}. As mentioned earlier, the point $1$ should be outside the numerical range for the Chebyshev polynomial to be bounded. Using \eqref{eq: maxrealval}, we have $ re(G) = \max \mathop{Re}(W(G)) = \lambda_{\max}\left(\frac{G + G^*}{2}\right), $ and the step length parameters $\sigma, \tau$ should be chosen such that $re(G) < 1$. We observe empirically that there exists a range of values of the step length parameters such that $re(G) < 1$. Figure~\ref{fig:fieldvals_sonar} shows the numerical range of operator $G$ for $\sigma = 4, \tau =1/\|A^TA\|$ with two different regularization constants and Figure \ref{fig:fieldvals_sonar_contours} shows the regions for which $re(G^p)\leq 1$ (converging) for different values of $\sigma$ and~$\tau$.
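The operator above and the quantity $re(G)$ are simple to evaluate numerically. The sketch below assumes NumPy and uses a random matrix in place of the Sonar data; it also checks the symmetry condition $\sigma = \tau/(1+\tau\mu)$ mentioned above.

```python
import numpy as np

def cp_operator(A, mu, sigma, tau):
    """Iteration matrix of the momentum-free primal-dual method on
    f(Ax) = 0.5*||Ax - b||^2, g(x) = 0.5*mu*||x||^2."""
    m, d = A.shape
    s, t = 1.0 + sigma, 1.0 + tau * mu
    top = np.hstack([np.eye(m) / s, sigma * A / s])
    bot = np.hstack([tau * A.T / (s * t),
                     np.eye(d) / t - tau * sigma * (A.T @ A) / (s * t)])
    return np.vstack([top, bot])

def max_real_numrange(M):
    """re(M) = lambda_max of the symmetric part (M real)."""
    return np.linalg.eigvalsh((M + M.T) / 2).max()

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
A /= np.linalg.norm(A, 2)            # scale so that ||A^T A|| = 1
mu = 1e-1
G = cp_operator(A, mu, sigma=4.0, tau=1.0)
r = max_real_numrange(G)             # admissible step sizes require r < 1

# G becomes symmetric exactly when sigma = tau / (1 + tau * mu).
tau0 = 1.0
G_sym = cp_operator(A, mu, sigma=tau0 / (1 + tau0 * mu), tau=tau0)
```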
\begin{figure} \caption{Field values for the Sonar dataset \citep{gorman1988analysis} with $\sigma = 4, \tau =1/\|A^TA\|$. The dataset has been scaled such that $\|A^TA\| = 1$. Left: $\mu = 10^{-3}$, right: $\mu = 10^{-1}$. The smaller numerical range on the right means faster convergence.} \label{fig:fieldvals_sonar} \end{figure} \begin{figure} \caption{Plot of the $re(G^p)=1$ frontier with degree $p=5$ for the Sonar dataset \citep{gorman1988analysis} for different values of~$\tau$ and~$\sigma$. White color represents values for which $re(G^p)\leq 1$ (converging) and black color represents values $re(G^p)>1$ (not converging). Left: $\mu = 10^{-3}$. Right: $\mu = 10^{-1}$.} \label{fig:fieldvals_sonar_contours} \end{figure} \oldsection{RNA on nonlinear iterations}\label{s:nonlin} In the previous sections, we analyzed the rate of convergence of RNA on linear algorithms (or quadratic optimization problems). In practice however, the operator $g$ is usually not linear, and may be corrupted by random perturbations. In this situation, a regularization parameter ensures that RNA converges \citep{scieur2016regularized}. In this section, we first introduce the CNA algorithm, a constrained version of RNA that explicitly bounds the norm of the coefficients $c$ in the linear combination. We show its equivalence with the RNA algorithm. We then analyze the rate of convergence of CNA when $g$ is a linear function perturbed by arbitrary errors, which may come from nonlinearities and/or random noise. \subsection{Constrained Nonlinear Acceleration} We now introduce the constrained version of RNA, replacing the regularization term by the hard constraint \[ \| c \|_2 \leq \frac{1+\tau}{\sqrt{N}}. \] In this algorithm, the parameter $\tau > 0$ controls the norm of the coefficients $c$. Of course, all the previous analysis applies to CNA, as RNA with $\lambda = 0$ is exactly CNA with $\tau = \infty$.
\begin{algorithm}[htb] \caption{Constrained Nonlinear Acceleration (\textbf{CNA})} \label{algo:cna} \begin{algorithmic} \STATE {\bfseries Data:} Matrices $X$ and $Y$ of size $d\times N$ constructed from the iterates as in~\eqref{eq:general_iteration} and~\eqref{eq:def_xy}. \STATE {\bfseries Parameters:} Mixing $\eta\neq 0$, constraint $\tau \geq 0$.\\ \hrulefill \STATE \textbf{1.} Compute matrix of residuals $R = X-Y$. \STATE \textbf{2.} Solve \BEQ \textstyle c^{(\tau)} = \argmin_{c:c^T\textbf{1} = 1} \|Rc\|_2 \quad \text{s.t. } \; \|c\|_2\leq {\textstyle \frac{1+\tau}{\sqrt{N}}} \label{eq:ctau} \EEQ \STATE \textbf{3.} Compute extrapolated solution $y^{\text{extr}} = (Y-\eta R)c^{(\tau)}$. \end{algorithmic} \end{algorithm} \subsection{Equivalence Between Constrained \& Regularized Nonlinear Acceleration} The parameters $\lambda$ in Algorithm \ref{algo:rna} and $\tau$ in Algorithm \ref{algo:cna} play similar roles. High values of $\lambda$ give coefficients close to simple averaging, and $\lambda = 0$ retrieves Anderson Acceleration. We have the same behavior when $\tau = 0$ or $\tau = \infty$. We can jump from one algorithm to the other using dual variables, since~\eqref{eq:cl} is the Lagrangian relaxation of the convex problem \eqref{eq:ctau}. This means that, for all values of $\tau$, there exists $\lambda = \lambda(\tau)$ that achieves $c^{\lambda} = c^{(\tau)}$. In fact, we can retrieve $\tau$ from the solution $c^{\lambda}$ by solving \[ \textstyle \frac{1+\tau}{\sqrt{N}} = \|c^{\lambda}\|_2. \] Conversely, to retrieve $\lambda$ from $c^{(\tau)}$, it suffices to solve \BEQ \left\| \frac{(R^TR+(\lambda\|R\|^2_2) I)^{-1} \textbf{1}_N}{\textbf{1}_N^T(R^TR+(\lambda\|R\|^2_2) I)^{-1}\textbf{1}_N} \right\|^2 = \frac{(1+\tau)^2}{N}, \label{eq:nonlinear_equation} \EEQ assuming the constraint in \eqref{eq:ctau} is tight; otherwise $\lambda = 0$.
Because the norm in \eqref{eq:nonlinear_equation} is monotonically decreasing in $\lambda$, a binary search or a one-dimensional Newton method gives the solution in a few iterations. The next proposition bounds the norm of the coefficients of Algorithm~\ref{algo:rna} with an expression similar to~\eqref{eq:ctau}. \begin{proposition} The norm of $c^{\lambda}$ from \eqref{eq:cl} is bounded by \BEQ \textstyle \|c^{\lambda}\|_2 \leq \frac{1}{\sqrt{N}}\sqrt{1+\frac{1}{\lambda}}. \label{eq:bound_norm_lambda} \EEQ \end{proposition} \begin{proof} See \citet{scieur2016regularized}, (Proposition 3.2). \end{proof} Having established the equivalence between constrained and regularized nonlinear acceleration, the next section discusses the rate of convergence of CNA in the presence of perturbations. \subsection{Constrained Chebyshev Polynomial} The previous results consider the special cases where $\lambda = 0$ or $\tau = \infty$, which means that $\|c\|$ is unbounded. However, \citet{scieur2016regularized} show instability issues when $\|c\|$ is not controlled. Regularization is thus required in practice to make the method more robust to perturbations, even in the quadratic case (e.g., round-off errors). Unfortunately, this section will show that robustness comes at the cost of a potentially slower rate of convergence. We first introduce \textit{constrained Chebyshev polynomials} for the range of a specific matrix. Earlier work in \citet{scieur2016regularized} considered regularized Chebyshev polynomials, but using a constrained formulation significantly simplifies the convergence analysis here. This polynomial plays an important role in the convergence analysis of Section~\ref{sec:convergence_rate_cna}.
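Putting the two formulations together, the coefficients $c^{(\tau)}$ can be obtained from the regularized solution $c^{\lambda}$ by searching over the dual variable $\lambda$, using the fact that $\|c^{\lambda}\|_2$ shrinks toward $1/\sqrt{N}$ (simple averaging) as $\lambda$ grows. A sketch, assuming NumPy (not the authors' reference implementation):

```python
import numpy as np

def rna_coeffs(R, lam):
    """Regularized coefficients: solve (R^T R + lam*||R||^2 I) c = 1,
    then normalize so that the coefficients sum to one."""
    N = R.shape[1]
    M = R.T @ R + lam * np.linalg.norm(R, 2) ** 2 * np.eye(N)
    c = np.linalg.solve(M, np.ones(N))
    return c / c.sum()

def cna_coeffs(R, tau, tol=1e-10):
    """Constrained coefficients c^(tau) via binary search on the dual
    variable lambda; larger lambda gives a smaller coefficient norm."""
    N = R.shape[1]
    bound = (1 + tau) / np.sqrt(N)
    c0 = rna_coeffs(R, 0.0)
    if np.linalg.norm(c0) <= bound:
        return c0                     # constraint inactive: plain Anderson step
    lo, hi = 0.0, 1.0
    while np.linalg.norm(rna_coeffs(R, hi)) > bound:
        hi *= 2                       # terminates: norm tends to 1/sqrt(N)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if np.linalg.norm(rna_coeffs(R, mid)) > bound:
            lo = mid
        else:
            hi = mid
    return rna_coeffs(R, hi)

rng = np.random.default_rng(0)
R = rng.standard_normal((30, 5))
c = cna_coeffs(R, tau=0.5)
```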
\begin{definition} The {Constrained Chebyshev Polynomial} $\mathcal{T}^{\tau,G}_N(x)$ of degree $N$ solves, for $\tau \geq 0$, \BEQ \mathcal{T}^{\tau,G}_N(x) \triangleq \argmin_{p\in \mathcal{P}_{[N]}^{(1)}} \max_{x\in W(G)} |p(x)| \quad \text{s.t.} ~ \|p\|_2 \leq {\textstyle \frac{1+\tau}{\sqrt{1+N}}} \label{eq:constrained_cheby} \EEQ in the variable ${p\in \mathcal{P}_{[N]}^{(1)}}$, where $W(G)$ is the numerical range of $G$. We write $\mathcal{C}^{\tau,G}_N \triangleq \| \mathcal{T}^{\tau,G}_N(G) \|_2$ for the norm of the polynomial $\mathcal{T}^{\tau,G}_N$ applied to the matrix $G$. \end{definition} \subsection{Convergence Rate of CNA without Perturbations} \label{sec:convergence_rate_cna} The previous section introduced constrained Chebyshev polynomials, which play an essential role in our convergence results when $g$ is nonlinear and/or the iterates~\eqref{eq:general_iteration} are noisy. Instead of analyzing Algorithm \ref{algo:rna} directly, we focus on its constrained counterpart, Algorithm \ref{algo:cna}. \begin{proposition} \label{prop:rate_constrained} Let $X$, $Y$ in \eqref{eq:def_xy} be built using iterates from \eqref{eq:general_iteration}, where $g$ is the linear operator \eqref{eq:linear_g} and does not have $1$ as an eigenvalue. Then, the norm of the residual \eqref{eq:residue} of the extrapolation produced by Algorithm \ref{algo:cna} is bounded by \BEQ \|r(y^{\text{extr}})\|_2 \leq \| I-\eta (G-I) \|_2 \|r(x_0) \|_2\; \mathcal{C}^{\tau,G}_{N-1}, \EEQ where $\tau \geq 0$ and $\mathcal{C}^{\tau,G}_N$ is defined in \eqref{eq:constrained_cheby}. \end{proposition} \begin{proof} The proof is similar to that of Theorem \ref{thm:optimal_rate}. It suffices to use the constrained Chebyshev polynomial rather than the rescaled Chebyshev polynomial from \cite{golub1961chebyshev}. \end{proof} Proposition \ref{prop:rate_constrained} with $\tau = \infty$ gives the same result as Theorem \ref{thm:optimal_rate}.
However, smaller values of $\tau$ give weaker results, as $\mathcal{C}^{\tau,G}_{N-1}$ increases when $\tau$ decreases. On the other hand, smaller values of $\tau$ also reduce the norm of the coefficients $c^{(\tau)}$ \eqref{eq:ctau}, which makes the algorithm more robust to noise. Using the constrained algorithm in the context of a non-perturbed linear function $g$ yields no theoretical benefit, but the bounds on the extrapolation coefficients simplify the analysis of perturbed nonlinear optimization schemes, as we will see below. In this section, we analyze the convergence rate of Algorithm \ref{algo:cna} for simplicity, but the results also hold for Algorithm \ref{algo:rna}. We first introduce the concept of perturbed linear iterations, then we analyze the convergence rate of CNA in this setting. \textbf{Perturbed Linear Iterations.} Consider the following perturbed scheme, \BEA \label{eq:perturbed_iteration_matrix} \tilde X_i = X^* + G(\tilde Y_{i-1}-X^*) + E_i, \qquad \tilde Y_i = [x_0,\tilde X_i] L_i, \EEA where $\tilde X_i$ and $\tilde Y_i$ are formed as in~\eqref{eq:def_xy} using the perturbed iterates $\tilde x_i$ and $\tilde y_i$, $L_i$ is constructed using \eqref{eq:recurence_L}, and we write $E_i = [e_1,e_2,\ldots,e_i]$. For now, we do not assume anything about $e_i$ or $E_i$. This class contains many schemes, such as gradient descent on nonlinear functions, stochastic gradient descent, or Nesterov's fast gradient with backtracking line search. The notation \eqref{eq:perturbed_iteration_matrix} makes the analysis simpler than in \citep{scieur2016regularized,scieur2017nonlinear}, as we have an explicit form for the error over time. Consider the perturbation matrix $P_i$, \BEQ P_i \triangleq \tilde R_i - R_i. \label{eq:perturbation_matrix} \EEQ Proposition \ref{prop:explicit_formula_perturbation} shows that the magnitude of the perturbations $\|P_i\|$ is proportional to the noise matrix $\|E_i\|$, i.e., $\|P_i\| = O(\|E_i\|)$.
\begin{proposition} \label{prop:explicit_formula_perturbation} Let $P_i$ be defined in \eqref{eq:perturbation_matrix}, where $(X_i, Y_i)$ and $(\tilde X_i, \tilde Y_i)$ are formed respectively by \eqref{eq:general_iteration_matrix} and \eqref{eq:perturbed_iteration_matrix}. Let $\bar L_{j} = \| L_1\|_2 \| L_{2}\|_2 \ldots \|L_j \|_2$. Then, we have the following bound \[ \|P_i\| \leq 2\|E_i\| \bar L_{i} \sum_{j=1}^{i} \|G\|^j. \] \end{proposition} \begin{proof} First, we start with the definitions of $R$ and $\tilde R$. Indeed, \[ \tilde R_i - R_i = \tilde X_i-X_i - (\tilde Y_{i-1} - Y_{i-1}). \] By definition, \[ \tilde X_i - X_i = G(\tilde Y_{i-1}-X^*) + X^* + E_i - G( Y_{i-1}-X^*) - X^* = G(\tilde Y_{i-1}-Y_{i-1}) + E_i. \] On the other hand, \[ \tilde Y_{i-1} - Y_{i-1} = [0;\tilde X_{i-1}-X_{i-1}]L_{i-1}. \] We thus have \BEAS P_i & = & \tilde X_i-X_i - (\tilde Y_{i-1} - Y_{i-1}),\\ & = & G(\tilde Y_{i-1}-Y_{i-1}) + E_i - [0;\tilde X_{i-1}-X_{i-1}]L_{i-1},\\ & = & G( [0;\tilde X_{i-1}-X_{i-1}]L_{i-1}) + E_i - [0;G(\tilde Y_{i-2}-Y_{i-2}) + E_{i-1}]L_{i-1},\\ & = & G [0;P_{i-1}]L_{i-1} + E_i - [0;E_{i-1}]L_{i-1}.\\ \EEAS Finally, knowing that $\|E_i\| \geq \|E_{i-1}\|$ and $\|L_i\|\geq 1$, we expand \BEAS \|P_i\| & \leq & \| G \| \|P_{i-1}\|\|L_{i-1}\| + \|E_i\| + \|E_{i-1}\|\|L_{i-1}\|\\ & \leq & \| G \| \|P_{i-1}\|\|L_{i-1}\| + 2\|E_i\| \|L_{i-1}\| \EEAS to obtain the desired result. \end{proof} We now analyze how close the output of Algorithm \ref{algo:cna} is to $x^*$. To do so, we compare scheme \eqref{eq:perturbed_iteration_matrix} to its perturbation-free counterpart \eqref{eq:general_iteration_matrix}. Both schemes have the same starting point $x_0$ and ``fixed point''~$x^*$. It is important to note that scheme~\eqref{eq:perturbed_iteration_matrix} may not converge due to noise. The next theorem bounds the accuracy of the output of CNA.
\begin{theorem} \label{thm:convergence_perturbation} Let $y^{\text{extr}}$ be the output of Algorithm \ref{algo:cna} applied to \eqref{eq:perturbed_iteration_matrix}. Its accuracy is bounded by \BEAS \|(G-I) \left(y^{\text{extr}} - x^*\right)\| \leq \|I-\eta(G-I) \| \Big(\underbrace{ \mathcal{C}^{\tau,G}_{N-1} \|(G-I)(x_0-x^*)\|}_{\textbf{acceleration}} + \underbrace{\textstyle \frac{1+\tau}{\sqrt{N}} \big( \|P_N\| + \|E_N\|\big)}_{\textbf{stability}}\Big). \EEAS \end{theorem} \begin{proof} We start with the following expression, for arbitrary coefficients $c$ that sum to one, \[ (G-I) \left((\tilde Y - \eta \tilde R)c - x^*\right). \] Since \[ \tilde R = \tilde X - \tilde Y = (G-I)(\tilde Y - X^*) + E, \] we have \[ (G-I)(\tilde Y-X^*) = (\tilde R-E). \] So, \[ (G-I) (\tilde Y-X^* - \eta \tilde R)c = (\tilde R-E)c - \eta (G-I)\tilde Rc . \] After rearranging the terms we get \BEQ (G-I) \left((\tilde Y - \eta \tilde R)c - x^*\right) = (I-\eta(G-I))\tilde Rc - E c.\label{eq:decomposition_error} \EEQ We bound \eqref{eq:decomposition_error} as follows, using coefficients from \eqref{eq:ctau}, \[ \|I-\eta(G-I)\| \|\tilde Rc^{(\tau)}\| + \|E\| \|c^{(\tau)}\|. \] Indeed, \BEAS \|\tilde Rc^{(\tau)}\|^2 & = & \min_{c: c^T \textbf{1} = 1,\; \|c\| \leq \frac{1+\tau}{\sqrt{N}}} \|\tilde Rc\|^2. \EEAS We have \BEAS \min_{c:\textbf{1}^Tc=1,\; \|c\| \leq \frac{1+\tau}{\sqrt{N}}} \|\tilde Rc\|_2, & \leq & \min_{c:\textbf{1}^Tc=1\; \|c\| \leq \frac{1+\tau}{\sqrt{N}}} \|Rc\|_2 + \|P_Nc\|_2,\\ & \leq & \left(\min_{c:\textbf{1}^Tc=1\; \|c\| \leq \frac{1+\tau}{\sqrt{N}}} \|Rc\|_2\right) + \|P_N\|_2\frac{1+\tau}{\sqrt{N}} ,\\ & \leq & \mathcal{C}^{\tau,G}_{N-1} \|r(x_0)\| + \frac{\|P_N\|(1+\tau)}{\sqrt{N}}. \EEAS This proves the desired result. \end{proof} This theorem shows that Algorithm \ref{algo:cna} balances acceleration and robustness.
The result bounds the accuracy by the sum of an \textit{acceleration term} bounded using constrained Chebyshev polynomials, and a \textit{stability term} proportional to the norm of the perturbations. In the next section, we consider the particular case where $g$ corresponds to a gradient step, with perturbations that are Gaussian or due to nonlinearities. \oldsection{Convergence Rates for CNA on Gradient Descent}\label{s:grad} We now apply our results when $g$ in \eqref{eq:general_iteration_matrix} corresponds to the gradient step \BEQ x-h\nabla f(x), \label{eq:gradient_step_g} \EEQ where $f$ is the objective function and $h$ a step size. We assume that the function $f$ is twice differentiable, $L$-smooth and $\mu$-strongly convex, which means \BEQ \mu I \preceq \nabla^2 f(x) \preceq LI. \label{eq:smooth_strong_convex} \EEQ We also assume $h = \frac{1}{L}$ for simplicity. Since we consider the optimization of differentiable functions here, the matrix $\nabla^2 f(x^*)$ is symmetric. When we apply the gradient method \eqref{eq:gradient_step_g}, we first consider its linear approximation \BEQ g(x) = x-h\nabla^2 f(x^*) (x-x^*), \label{eq:linear_gradient_step} \EEQ with stepsize $h=1/L$. We identify the matrix $G$ in \eqref{eq:linear_g} as \[ G = I-\frac{\nabla^2 f(x^*)}{L}. \] In this case, because the Hessian is symmetric, the numerical range $W(G)$ simplifies into the line segment \[ W(G) = [0,1-\kappa], \] where $\kappa = \frac{\mu}{L} < 1$ is the inverse of the condition number of the matrix $\nabla^2 f(x^*)$. In the next sections, we study two different cases. First, we assume the objective is quadratic, but \eqref{eq:linear_gradient_step} is corrupted by random noise. Then, we consider a general nonlinear function $f$, with the additional assumption that its Hessian is Lipschitz-continuous. This corresponds to a nonlinear, deterministic perturbation of \eqref{eq:linear_gradient_step}, whose noise is bounded by $O(\|x-x^*\|^2)$.
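In this setting, the whole pipeline is easy to simulate: run plain gradient steps on a quadratic, stack the iterates into $X$ and $Y$, and extrapolate. The sketch below assumes NumPy, mixing $\eta = 1$ and a small regularization $\lambda$ (an offline extrapolation step in the spirit of Algorithm~\ref{algo:rna}, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 40, 10
B = rng.standard_normal((60, d))
H = B.T @ B / 60                      # Hessian, so mu*I <= H <= L*I
L_ = np.linalg.eigvalsh(H).max()
x_star = rng.standard_normal(d)

# Gradient steps g(x) = x - (1/L) * H (x - x*), i.e. G = I - H/L.
xs = [np.zeros(d)]
for _ in range(N):
    xs.append(xs[-1] - (H @ (xs[-1] - x_star)) / L_)

Y = np.column_stack(xs[:-1])          # iterates x_0, ..., x_{N-1}
X = np.column_stack(xs[1:])           # iterates x_1, ..., x_N
R = X - Y                             # matrix of residuals
lam = 1e-10
M = R.T @ R + lam * np.linalg.norm(R, 2) ** 2 * np.eye(N)
c = np.linalg.solve(M, np.ones(N))
c /= c.sum()                          # coefficients sum to one
y_extr = (Y - 1.0 * R) @ c            # extrapolation with eta = 1
```

The combined residual $\|Rc\|$ is (up to the tiny regularization) no larger than the last residual, and the extrapolated point typically has a much smaller gradient norm than the final iterate.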
\subsection{Random Perturbations} We perform a gradient step on the quadratic form \[ f(x) = \frac{1}{2}(x-x^*)^T A (x-x^*), \;\; \mu I \preceq A \preceq L I. \] This corresponds to \eqref{eq:linear_gradient_step} with $\nabla^2 f(x^*) = A$. However, each iteration is corrupted by $e_i$, where $e_i$ is Gaussian with variance $\sigma^2$. The next proposition is the application of Theorem \ref{thm:convergence_perturbation} to our setting. To simplify the results, we consider $\eta = 1$. \begin{proposition} \label{prop:convergence_stoch_gradient} Assume we use Algorithm \ref{algo:cna} with \mbox{$\eta = 1$} on $N$ iterates from \eqref{eq:perturbed_iteration_matrix}, where $g$ is the gradient step \eqref{eq:gradient_step_g} and the $e_i$ are zero-mean independent random noises with variance bounded by $\sigma^2$. Then, \BEQ \mathbb{E}[\|\nabla f(y^{\text{extr}})\|] \leq (1-\kappa) \;\mathcal{C}^{\tau,G}_{N-1}\|\nabla f(x_0)\| + \mathcal{E} , \EEQ where \[ \mathcal{E} \leq (1-\kappa)\frac{1+\tau}{\sqrt{N}} L\sigma \sum_{j=1}^{N} (1-\kappa)^j \bar L_{j}. \] In the simple case where we accelerate the gradient descent algorithm, all $L_i = I$ and thus \[ \textstyle \mathcal{E} \leq \frac{1+\tau}{\sqrt{N}} \frac{L\sigma}{\kappa}. \] \end{proposition} \begin{proof} Since $\eta = 1$, \[ \|I-\eta(G-I)\| = \|G\| \leq 1-\kappa. \] Now, consider $\mathbb{E}[\|E\|]$. Because the $e_i$ are independent Gaussian noises with variance bounded by $\sigma^2$, we have \[ \mathbb{E}[\|E\|] \leq \sqrt{\mathbb{E}[\|E\|^2]} \leq \sigma.
\] Similarly, for $P$ \eqref{eq:perturbation_matrix}, we use Proposition \ref{prop:explicit_formula_perturbation} and we have \BEAS \mathbb{E}[\|P\|] & \leq & \textstyle \mathbb{E}[\|E_i\|] \left(1+\sum_{j=1}^{i} (1-\kappa)^j \bar L_{j}\right)\\ & \leq & \textstyle \sigma \left(1+\sum_{j=1}^{i} (1-\kappa)^j \bar L_{j}\right) \EEAS Thus, the stability term $\mathcal{E}$ in Theorem \ref{thm:convergence_perturbation} becomes \[ \mathcal{E} \leq \textstyle \frac{\sigma(1+\tau)}{\sqrt{N}} \left(2+\sum_{j=1}^{N} (1-\kappa)^j \bar L_{j}\right). \] Finally, it suffices to see that \[ (G-I)(x-x^*) = -(A/L)(x-x^*) = -\frac{1}{L} \nabla f(x), \] and we get the desired result. In the special case of the plain gradient method, $L_i = I$ so $\bar L_i = 1$. We then get \[ \textstyle \sum_{j=1}^{N} (1-\kappa)^j \leq \sum_{j=1}^{\infty} (1-\kappa)^j \leq \frac{1}{\kappa}, \] which is the desired result. \end{proof} This proposition also applies to gradient descent with momentum or with our online acceleration algorithm \eqref{eq:online_rna}. We can distinguish two different regimes when accelerating gradient descent with noise: one when $\sigma$ is small compared to $\|\nabla f(x_0)\|$, and one when $\sigma$ is large. In the first case, the acceleration term dominates, and Algorithm \ref{algo:cna} with large $\tau$ produces an output $y^{\text{extr}}$ that converges with a near-optimal rate of convergence. In the second regime, where the noise dominates, $\tau$ should be close to zero. In this case, using our extrapolation method when perturbations are high naturally gives the simple averaging scheme. We can thus see Algorithm \ref{algo:cna} as a way to interpolate between optimal acceleration and averaging. \subsection{Nonlinear Perturbations} Here, we study the general case where the perturbations $e_i$ are bounded by a function of $D$, where $D$ satisfies \BEQ \| \tilde y_i - x^* \|_2 \leq D \qquad \forall i.
\label{eq:def_d} \EEQ This assumption is usually met when we accelerate non-divergent algorithms. More precisely, we assume the perturbations are bounded by \BEQ \big(\|I-\eta(G-I) \| \|P_N\| + \|E\|\big) \leq \gamma\sqrt{N} D^\alpha, \label{nonlinear_perturbation} \EEQ where $\gamma$ and $\alpha$ are scalars. Since $\|P_N\| = O(\|E\|)$ by Proposition \ref{prop:explicit_formula_perturbation}, we have that \BEQ \label{eq:condition_perturbation_column} \|e_i\| \leq O(D^\alpha) \Rightarrow \eqref{nonlinear_perturbation}. \EEQ We call these perturbations ``nonlinear'' because the error term typically corresponds to the difference between $g$ and its linearization around $x^*$. For example, the optimization of smooth non-quadratic functions with gradient descent can be described using \eqref{nonlinear_perturbation} with $\alpha = 1$ or $\alpha = 2$, as shown in Section \ref{sec:smooth_functions}. The next proposition bounds the accuracy of the extrapolation produced by Algorithm \ref{algo:cna} in the presence of such perturbations. \begin{proposition} \label{prop:conv_nonlinear} Consider Algorithm \ref{algo:cna} with $\eta = 1$ on $N$ iterates from \eqref{eq:perturbed_iteration_matrix}, where the iterates satisfy \eqref{eq:def_d} and the perturbations satisfy \eqref{nonlinear_perturbation}. Then, \BEAS \textstyle \left\|(G-I)(y^{\text{extr}}-x^*)\right\| & \leq & (1-\kappa)\Big( \mathcal{C}^{\tau,G}_{N-1}\left\|(G-I)(x_0-x^*)\right\| + \mathcal{E}\Big) \EEAS where $\mathcal{E} \leq (1+\tau)\gamma D^\alpha$. \end{proposition} \begin{proof} Combine Theorem \ref{thm:convergence_perturbation} with assumption \eqref{nonlinear_perturbation}. \end{proof} Here, $\|x_0-x^*\|$ is of the order of $D$. This bound is generic, as it does not rely on any strong structural assumption on $g$, only on the fact that its first-order approximation error is bounded by a power of $D$. We did not even assume that scheme \eqref{eq:perturbed_iteration_matrix} converges. This explains why Proposition \ref{prop:conv_nonlinear} does not necessarily give a convergent bound.
Nevertheless, in the case of a convergent scheme, Algorithm \ref{algo:cna} with $\tau = 0$ outputs the average of the previous iterates, which also converges to $x^*$. However, Proposition \ref{prop:conv_nonlinear} is interesting when perturbations are small compared to $\|x_0-x^*\|$. In particular, it is possible to link $\tau$ and $D^\alpha$ so that Algorithm \ref{algo:cna} asymptotically reaches an optimal rate of convergence, when $D\rightarrow 0$. \begin{proposition}\label{prop:asymptotic_optimal_rate} If $\tau = O(D^{-s})$ with $0<s<\alpha-1$, then, when $D\rightarrow 0$, Proposition \ref{prop:conv_nonlinear} becomes \BEAS \textstyle \left\|(G-I)(y^{\text{extr}}-x^*)\right\| & \leq & (1-\kappa) \left(\frac{1-\sqrt{\kappa}}{1+\sqrt{\kappa}}\right)^{N-1}\left\|(G-I)(x_0-x^*)\right\| \EEAS The same result holds with Algorithm \ref{algo:rna} if $\lambda = O(D^r)$ with $0<r<2(\alpha-1)$. \end{proposition} \begin{proof} By assumption, \[ \|x_0 - x^*\| = O(D). \] We thus have, by Proposition \ref{prop:conv_nonlinear}, \BEAS \textstyle \left\|(G-I)(y^{\text{extr}}-x^*)\right\| & \leq & (1-\kappa)\Big( \mathcal{C}^{\tau,G}_{N-1}O(D) + (1+\tau)O(D^{\alpha})\Big). \EEAS Here, $\tau$ will be a function of $D$, in particular $\tau = D^{-s}$. We want the following two conditions to hold, \[ \lim\limits_{D\rightarrow 0} (1+\tau(D))D^{\alpha-1} = 0, \qquad \lim\limits_{D\rightarrow 0} \tau(D) = \infty. \] The first condition ensures that the perturbations converge to zero faster than the acceleration term. The second condition asks $\tau$ to grow as $D$ decreases, so that CNA becomes unconstrained. Since $\tau = D^{-s}$, we have to solve \[ \lim\limits_{D\rightarrow 0} D^{\alpha-1} + D^{\alpha-s-1} = 0, \qquad \lim\limits_{D\rightarrow 0} D^{-s} = \infty. \] Clearly, $0 < s < \alpha-1$ satisfies the two conditions.
After taking the limit, we obtain \[ \textstyle \left\|(G-I)(y^{\text{extr}}-x^*)\right\| \leq (1-\kappa) \mathcal{C}^{\tau,G}_{N-1} \|(G-I)(x_0-x^*)\| \] Since $W(G)$ is the real line segment $[0,1-\kappa]$, and because $\tau \rightarrow \infty$, we end up with an unconstrained minimax polynomial. Therefore, we can use the result from \citet{golub1961chebyshev}, \[ \min_{p\in \mathcal{P}_{[N]}^{(1)}}\max_{\lambda \in [0,1-\kappa]} |p(\lambda)| \leq \left(\frac{1-\sqrt{\kappa}}{1+\sqrt{\kappa}}\right)^{N-1}. \] For the second result, using \eqref{eq:bound_norm_lambda}, \[ \|c^{\lambda}\|_2 \leq \frac{1}{\sqrt{N}}\sqrt{1+\frac{1}{\lambda}}. \] Setting \[ \frac{1+\tau}{\sqrt{N}} = \frac{1}{\sqrt{N}}\sqrt{1+\frac{1}{\lambda}} \] with $\tau = D^{-s}$ gives the conditions. \end{proof} This proposition shows that, when perturbations are of the order of $D^\alpha$ with $\alpha > 1$, our extrapolation algorithm converges optimally once the $\tilde y_i$ are close to the solution $x^*$. The next section shows that this holds, for example, when minimizing functions whose Hessian is Lipschitz continuous. \subsection{Optimization of Smooth Functions} \label{sec:smooth_functions} Let the objective function $f$ be a nonlinear function that satisfies \eqref{eq:smooth_strong_convex}, and whose Hessian is Lipschitz continuous with constant $M$, \BEQ \label{eq:smooth_gradient} \|\nabla^2 f(y)-\nabla^2 f(x)\| \leq M\|y-x\|. \EEQ This assumption is common in the convergence analysis of second-order methods. For the convergence analysis, we consider that $g(x)$ performs a gradient step on the quadratic function \BEQ \frac{1}{2}(x-x^*)^T\nabla^2 f(x^*)(x-x^*), \label{eq:gradient_approx} \EEQ which is the quadratic approximation of $f$ around $x^*$. The gradient step thus reads, if we set $h=1/L$, \BEQ g(x) = \left(I-\frac{\nabla^2 f(x^*)}{L}\right)(x-x^*)+x^*.
\label{eq:gradient_step_linearized} \EEQ The perturbed scheme corresponds to the application of \eqref{eq:gradient_step_linearized} with a specific nonlinear perturbation, \BEQ \textstyle \tilde x_{i+1} = g(\tilde y_i) - \underbrace{ \textstyle \frac{1}{L}(\nabla f(\tilde y_i)-\nabla^2 f(x^*)(\tilde y_i-x^*))}_{=e_i}. \label{eq:nonlinear_perturbation} \EEQ This way, we recover the gradient step on the non-quadratic function $f$. The next proposition shows that the scheme \eqref{eq:nonlinear_perturbation} satisfies \eqref{nonlinear_perturbation} with $\alpha = 1$ when $D$ is large, or $\alpha=2$ when $D$ is small. \begin{proposition}\label{eq:bound_function_smooth_gradient} Consider the scheme \eqref{eq:nonlinear_perturbation}, where $f$ satisfies \eqref{eq:smooth_gradient}. If $\|\tilde y_i-x^*\| \leq D$, then \eqref{eq:condition_perturbation_column} holds with $\alpha = 1$ for large $D$ or $\alpha = 2$ for small $D$, i.e., \[ \|e_i\| = \left\|\frac{1}{L}(\nabla f(\tilde y_i)-\nabla^2 f(x^*)(\tilde y_i-x^*))\right\| \leq \min\left\{ \|\tilde y_i-x^*\| ,\;\; \frac{M}{2L} \|\tilde y_i-x^*\|^2\right\} \leq \min\left\{ D ,\;\; \frac{M}{2L} D^2\right\}. \] \end{proposition} \begin{proof} The proof of this statement can be found in \citet{nesterov2006cubic}. \end{proof} The combination of Proposition \ref{prop:asymptotic_optimal_rate} with Proposition \ref{eq:bound_function_smooth_gradient} means that RNA (or CNA) converges asymptotically when $\lambda$ (or $\tau$) is set properly. In other words, if $\lambda$ decreases a little faster than the perturbations, the extrapolation on the perturbed iterates behaves as if it were accelerating a perturbation-free scheme. Our result improves that of \citet{scieur2016regularized,scieur2017nonlinear}, where $r\in\left(0,\frac{2(\alpha-1)}{3}\right)$. \oldsection{Online Acceleration} \label{s:online} We now discuss the convergence of online acceleration, i.e. coupling the iterates of $g$ with the extrapolation Algorithm~\ref{algo:rna} at each iteration when $\lambda = 0$.
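As a reference for what a single extrapolation call computes, here is a minimal NumPy sketch (our notation, cf. \eqref{eq:cl}; the sign convention of the $\eta$-mixing varies between variants of the method, and this sketch uses $y_{i-1}+\eta(x_i-y_{i-1})$):

```python
import numpy as np

# Sketch of one extrapolation call. The weights solve the regularized
# system (R^T R + lam*I) z = 1 and are normalized to sum to one; the
# combined point mixes y_{i-1} and x_i with parameter eta.
def rna_extrapolate(X, Y, lam, eta):
    R = X - Y                                   # residual matrix
    N = R.shape[1]
    z = np.linalg.solve(R.T @ R + lam * np.eye(N), np.ones(N))
    c = z / z.sum()
    return (Y + eta * R) @ c

# Toy linear fixed-point iteration x_{i+1} = G y_i with x* = 0: with three
# distinct eigenvalues and four iterates, extrapolation is nearly exact.
G = np.diag([0.9, 0.5, 0.1])
y = np.array([1.0, 1.0, 1.0])
Xs, Ys = [], []
for _ in range(4):
    x = G @ y
    Ys.append(y); Xs.append(x)
    y = x                                       # offline: no feedback
y_extr = rna_extrapolate(np.array(Xs).T, np.array(Ys).T, 1e-12, eta=1.0)
```

On this linear toy problem the extrapolation recovers $x^*=0$ up to the (tiny) regularization, since a cubic polynomial can vanish at all three eigenvalues of $G$ while keeping $p(1)=1$.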
The iterates are now given by \BEA \label{eq:online_rna} x_{N} = g(y_{N-1}),\qquad y_N = \textbf{RNA}(X,Y,\lambda,\eta), \EEA where $\textbf{RNA}(X,Y,\lambda,\eta)=y^{\text{extr}}$ with $y^{\text{extr}}$ the output of Algorithm~\ref{algo:rna}. By construction, $y^{\text{extr}}$ is written \[ \textstyle y^{\text{extr}} = \sum_{i=1}^N c^{\lambda}_i (y_{i-1} - \eta (x_i-y_{i-1})). \] If $c^{\lambda}_N\neq 0$ then $y^{\text{extr}}$ matches \eqref{eq:general_iteration}, thus the online acceleration iterates in~\eqref{eq:online_rna} belong to the class of algorithms in~\eqref{eq:general_iteration}. If we can ensure $c^{\lambda}_N \neq 0$, applying Theorem~\ref{thm:optimal_rate} recursively will then show an optimal rate of convergence for the online acceleration iterations in~\eqref{eq:online_rna}. We do this for linear iterations in what follows. \subsection{Linear Iterations} The next proposition shows that either $c^{\lambda}_N \neq 0$ holds, or otherwise $y^{\text{extr}} = x^*$ in the linear case. \begin{proposition} \label{prop:online_accel_structure} Let $X$, $Y$ \eqref{eq:def_xy} be built using iterates from \eqref{eq:general_iteration}. Let $g$ be defined in \eqref{eq:linear_g}, where the eigenvalues of $G$ are different from one. Consider $y^{\text{extr}}$, the output of Algorithm \ref{algo:rna} with $\lambda = 0$ and $\eta \neq 0$. If $R = X-Y$ is full column rank, then $c^{\lambda}_N \neq 0$. Otherwise, $y^{\text{extr}} = x^*$. \end{proposition} \begin{proof} Since, by definition, $\textbf{1}^T c^{\lambda}=1$, it suffices to prove that the last coefficient $c^{\lambda}_N \neq 0$. For simplicity, in the scope of this proof we write $c=c^{\lambda}$. We prove the claim by contradiction. Let $R_-$ be the matrix $R$ without its last column, and $c_-$ the coefficients computed by RNA using $R_-$. Assume $c_N = 0$. In this case, \[ c = [c_-;\; 0] \qquad \text{and} \qquad Rc = R_- c_-.
\] This also means that, using the explicit formula for $c$ in \eqref{eq:cl}, \[ \frac{(R^TR)^{-1}\textbf{1}}{\textbf{1}(R^TR)^{-1}\textbf{1}} = \left[ \frac{(R_-^TR_-)^{-1}\textbf{1}}{\textbf{1}(R_-^TR_-)^{-1}\textbf{1}};\; 0\right], \qquad \Leftrightarrow \qquad (R^TR)^{-1}\textbf{1} = \left[ (R_-^TR_-)^{-1}\textbf{1};\; 0\right]. \] The equivalence is obtained because \[ \textbf{1}(R^TR)^{-1}\textbf{1} = \textbf{1}^T c = \textbf{1}^T c_- = \textbf{1}(R_-^TR_-)^{-1}\textbf{1}. \] We can write $c$ and $c_-$ in the form of linear systems, \[ R^TRc = \alpha \textbf{1}_N, \quad (R_-^TR_-)c_- = \alpha \textbf{1}_{N-1}, \] where $\alpha = \textbf{1}(R^TR)^{-1}\textbf{1} = \textbf{1}(R_-^TR_-)^{-1}\textbf{1}$, which is nonzero. We augment the system in $c_-$ by concatenating zeros, \[ R^TRc = \alpha \textbf{1}_N, \quad \begin{bmatrix} (R_-^TR_-) & 0_{N-1 \times 1} \\ 0_{1 \times N-1} & 0 \end{bmatrix} \begin{bmatrix} c_- \\ 0 \end{bmatrix} = \alpha \begin{bmatrix} \textbf{1}_{N-1}\\ 0 \end{bmatrix} \] Let $r_+$ be the residual at iteration $N$, so that $R = [R_-,r_+]$. We subtract the two linear systems, \[ \begin{bmatrix} 0 & R^Tr_+ \\ r_+^TR & r_+^Tr_+ \end{bmatrix} \begin{bmatrix} c_- \\ 0 \end{bmatrix} = \begin{bmatrix} 0\\ \alpha \neq 0 \end{bmatrix} \] The first $N-1$ equations tell us that either $(R^Tr_+)_i$ or $c_{-,i}$ is equal to zero. This implies \[ (R^Tr_+)^Tc = \sum_{i=1}^{N-1}(R^Tr_+)_i c_i=0. \] However, the last equation reads \[ (R^Tr_+)^Tc + 0\cdot r_+^Tr_+ \neq 0. \] This is a contradiction, since \[ (R^Tr_+)^Tc + 0\cdot r_+^Tr_+ = 0. \] Now, assume $R$ is not full rank. This means there exists a non-zero linear combination such that \[ Rc = 0. \] However, due to its structure, $R$ is a basis of the Krylov subspace \[ \mathcal{K}_N = \text{span}[r_0,Gr_0,\ldots, G^{N}r_0]. \] If the rank of $R$ is strictly less than $N$ (say, $N-1$), this means \[ \mathcal{K}_N = \mathcal{K}_{N-1}.
\] Due to properties of Krylov subspaces, this means that \[ r_0 = \sum_{i=1}^{N-1} \alpha_i v_i \] where the $\lambda_i$ are distinct eigenvalues of $G$, and the $v_i$ are associated eigenvectors. Thus, it suffices to take the polynomial $p^*$ of degree $N-1$ that vanishes at the $N-1$ distinct $\lambda_i$. In this case, \[ p^*(G)r_0 = 0. \] Since $p^*(1)\neq 0$ because $\lambda_i \leq 1-\kappa < 1$, we have \[ \min \|Rc\| = \min_{p\in \mathcal{P}_{[N-1]}^{(1)}} \|p(G)r_0\| = \left\|\frac{p^*(G)}{p^*(1)}r_0\right\| = 0, \] which is the desired result. \end{proof} This shows that we can use \textit{RNA to accelerate iterates coming from RNA}. In numerical experiments, we will see that this new approach significantly improves empirical performance. \subsection{RNA \& Nesterov's Method} We now briefly discuss a strategy that combines Nesterov's acceleration with RNA. This means using RNA instead of the classical momentum term in Nesterov's original algorithm. Using RNA, we can produce iterates that are asymptotically adaptive to the problem constants, while ensuring an optimal upper bound if one provides the constants $L$ and $\mu$. We show below how to design a condition that decides, after each gradient step, whether we should combine previous iterates using RNA or using Nesterov's coefficients. Nesterov's algorithm first performs a gradient step, then combines the two previous iterates. A more generic version with a basic line search reads \BEQ \begin{cases} \text{Find } x_{i+1} : f(x_{i+1}) \leq f(y_i) - \frac{1}{2L}\|\nabla f(y_i)\|_2^2\\ \textstyle y_{i+1} = (1+\beta) x_{i+1} - \beta x_{i}, \quad \beta = \frac{1-\sqrt{\kappa}}{1+\sqrt{\kappa}}\,. \end{cases} \label{eq:general_nesterov_step} \EEQ The first condition is automatically met when we perform the gradient step $x_{i+1} = y_i - \nabla f(y_i)/L$. Based on this, we propose the following algorithm.
\begin{algorithm}[htb] \caption{Optimal Adaptive Algorithm} \label{algo:optimal_adaptive} \begin{algorithmic} \STATE Compute gradient step $x_{i+1} = y_{i} - \frac{1}{L} \nabla f(y_i)$. \STATE Compute $y^{\text{extr}} = \textbf{RNA}(X,Y,\lambda,\eta)$. \STATE Let \[ z = \frac{y^{\text{extr}} + \beta x_i}{1+\beta} \] \STATE Choose the next iterate, such that \[ y_{i+1} = \begin{cases} y^{\text{extr}} \quad \text{If}\;\; f(z) \leq f(x_i) - \frac{1}{2L}\|\nabla f(x_i)\|_2^2,\\ (1+\beta) x_{i+1} - \beta x_{i}\quad \text{Otherwise}. \end{cases} \] \end{algorithmic} \end{algorithm} Algorithm \ref{algo:optimal_adaptive} has an optimal rate of convergence, i.e., it preserves the worst-case rate of the original Nesterov algorithm. The proof is straightforward: if the condition is not satisfied, we perform a standard Nesterov step; otherwise, we pick $z$ instead of the gradient step, and we combine \[ y_{i+1} = (1+\beta) z - \beta x_{i} = y^{\text{extr}}. \] By construction this satisfies~\eqref{eq:general_nesterov_step}, and inherits its properties, such as an optimal rate of convergence. \oldsection{Numerical Results}\label{s:numres} We now study the performance of our techniques on $\ell_2$-regularized logistic regression using acceleration on Nesterov's accelerated method\footnote{The source code for the numerical experiments can be found on GitHub at \url{https://github.com/windows7lover/RegularizedNonlinearAcceleration}}. We solve a classical regression problem on the Madelon-UCI dataset \citep{guyon2003design} using the logistic loss with $\ell_2$ regularization. The regularization is set such that the condition number of the function equals $10^{6}$. We compare with standard algorithms: the simple gradient scheme, Nesterov's method for smooth and strongly convex objectives \citep{nesterov2013introductory}, and L-BFGS. For the step length parameter, we used a backtracking line-search strategy.
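Before turning to the real datasets, here is a toy sanity check of the online coupling \eqref{eq:online_rna} on a strongly convex quadratic (a sketch only: the window size, the regularization, and the sign convention of the $\eta$-mixing are illustrative choices of this snippet, not the tuned values used in the experiments):

```python
import numpy as np

# Toy run of the online coupling x_N = g(y_{N-1}), y_N = RNA(X, Y, lam, eta)
# on a strongly convex quadratic, against a plain gradient loop.
def rna(X, Y, lam, eta):
    R = X - Y
    RtR = R.T @ R
    z = np.linalg.solve(RtR + lam * np.linalg.norm(RtR) * np.eye(R.shape[1]),
                        np.ones(R.shape[1]))
    c = z / z.sum()
    return (Y + eta * R) @ c

A = np.diag(np.linspace(0.1, 1.0, 20))     # f(x) = x^T A x / 2, x* = 0
g = lambda v: v - A @ v                    # gradient step, step size 1/L = 1

rng = np.random.default_rng(0)
y = y_plain = rng.standard_normal(20)
Xs, Ys = [], []
for _ in range(30):
    Ys.append(y); Xs.append(g(y))
    X, Y = np.array(Xs).T[:, -10:], np.array(Ys).T[:, -10:]  # window of 10
    y = rna(X, Y, 1e-10, eta=1.0)          # feed the extrapolation back in
    y_plain = g(y_plain)
```

On this toy instance the online iterates reach the solution far faster than the plain gradient loop, in line with the behaviour reported below on real data.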
We compare these methods with their offline RNA-accelerated counterparts, as well as with the online version of RNA described in~\eqref{eq:online_rna}. \begin{figure} \caption{Logistic loss on the Madelon dataset \citep{guyon2003design}. Comparison between offline (\textit{left}) and online (\textit{right}) strategies for RNA on gradient descent and Nesterov's method. We use L-BFGS (with $\ell=100$ gradients stored in memory) as a baseline. Clearly, one step of acceleration improves the accuracy. The performance of online RNA, which applies the extrapolation at \textit{each} step, is similar to that of L-BFGS, though RNA does not use a line search and requires 10 times less memory.} \label{fig:madelon} \end{figure} We observe in Figure \ref{fig:madelon} that offline RNA improves the convergence speed of gradient descent and Nesterov's method. However, the improvement is only by a constant factor: the curves are shifted but have the same slope. Meanwhile, the online version greatly improves the rate of convergence, transforming the basic gradient method into an optimal algorithm competitive with line-search L-BFGS. In contrast to most quasi-Newton methods (such as L-BFGS), RNA does \textit{not} require a Wolfe line search to be convergent, because the algorithm is stabilized by Tikhonov regularization. In addition, the regularization controls, to some extent, the impact of noise in the iterates, making RNA suitable for stochastic iterations \citep{scieur2017nonlinear}. We also tested the performance of online RNA on a general non-symmetric algorithm, the Primal-Dual Gradient Method (PDGM) \citep{chambolle2011first}, defined in \eqref{eq:iter_cp} with $\theta=0$. We observe in Figure \ref{fig:madelon_nonsym_logistic1} that RNA substantially improves the performance of the base algorithm. \begin{figure} \caption{Logistic loss on the Madelon dataset \citep{guyon2003design}. Left: $\ell_2$ regularization parameter $\mu = 10^{-2}$. Right: $\mu = 10^{2}$.
Comparison of online RNA on primal-dual gradient methods with other first-order algorithms.} \label{fig:madelon_nonsym_logistic1} \end{figure} We now study the performance of our techniques on several other classical applications: image processing problems, using extrapolation on Chambolle-Pock's algorithm, and the training of neural networks, using acceleration on stochastic gradient algorithms with momentum terms. \subsection{Accelerating Algorithms with Momentum Terms} The following numerical experiments highlight the benefits of RNA, in its offline and online versions, when applied to the gradient method (with or without momentum term). Since the complexity grows quadratically with the number $N$ of points in the sequences $\{x_i\}$ and $\{y_i\}$, we use RNA with a fixed window size ($N=5$ for stochastic and $N=10$ for convex problems) and regularization parameter $\lambda=10^{-8}\|R^TR\|_2$ in all these experiments. These values are sufficient to show a significant improvement in the rate of convergence, but can of course be fine-tuned. \FloatBarrier \begin{comment} \subsubsection{Training CNNs for image classification.} Because one stochastic iteration is not informative due to the noise, we refer to $x_k$ as the model parameters (including batch normalization statistics) corresponding to the final iteration of the epoch $k$. In this case, we do not have an explicit access to ``$(x_k-y_{k-1})$'', so we will estimate it during the stochastic steps.
Let $y_{k}^{(t)}$ be the parameters of the network at epoch $k$ after $t$ stochastic iterations, and $x_k^{(t+1)}$ be the parameters after one stochastic gradient step. Then, for a data set of size $D$, \[ x_{k}-y_{k-1} \approx \frac{1}{D} \sum_{t=1}^D (x_k^{(t+1)}-y_k^{(t)}) = -h\frac{1}{D} \sum_{t=1}^D \nabla f(y_{k}^{(t)}). \] This means the matrix $R$ in Algorithm \ref{algo:block-rna} will be the matrix of (estimated) gradients. Because the learning curve is highly dependent on the learning rate schedule, we decided to use a linearly decaying learning rate to better illustrate the benefits of acceleration, even if acceleration also works with a constant learning rate schedule (see \citep{scieur2018nonlinear} and Figure \ref{fig:proto}). In all our experiments, until epoch~$T$, the learning rate decreases linearly from an initial value $h_0$ to a final value $h_T$, with \BEQ h_k = h_0+(k/T)(h_T-h_0). \label{eq:lr_gen} \EEQ We then continue the optimization during $10$ additional epochs using $h_T$ to stabilize the curve. We summarize the parameters used for the optimization in Table \ref{tab:parameters}. { \begin{table}[h!t] \renewcommand{1.2}{1.2} \centering \begin{tabular}{l|ccc} & $h_0$ & $h_T$ & momentum \\ \hline SGD and Online RNA \eqref{eq:algo_rna} & 1.0 & 0.01 & 0 \\ SGD + momentum & 0.1 & 0.001 & 0.9\\ \end{tabular} \caption{Parameters used in \eqref{eq:lr_gen} to generate the learning rate for optimizers. We used the same setting for their accelerated version with RNA.} \label{tab:parameters} \end{table} } \FloatBarrier \subsubsection{CIFAR10} CIFAR-10 is a standard 10-class image dataset comprising $5 \cdot 10^4$ training samples and $10^4$ samples for testing. Except for the linear learning rate schedule above, we follow the standard practice for CIFAR-10. We applied the standard augmentation via padding of $4$ pixels. We trained the networks VGG19, ResNet-18 and DenseNet121 during $100$ epochs ($T = 90$) with a weight decay of $5\cdot10^{-4}$. 
We observe in Figure~\ref{fig:cifar10_online} that the online version does not perform as well as in the convex case. More surprisingly, it is outperformed by its offline version (Figure~\ref{fig:cifar10_offline}) which computes the iterates on the side. In fact, the offline experiments detailed in~Figure~\ref{fig:cifar10_offline} exhibit much more significant gains. It produces a similar test accuracy, and the offline version converges faster than SGD, especially for early iterations. We reported speedup factors to reach a certain tolerance in Table \ref{tab:speedup}. This suggests that the offline version of RNA is a good candidate for training neural networks, as it converges faster while guaranteeing performance \textit{at least} as good as the reference algorithm. \begin{figure} \caption{Prototyping networks: acceleration (bottom curves) gives a smoother convergence, producing a clearer ranking of architectures, much earlier (we use a flat learning rate). The right plot zooms on left one.} \label{fig:proto} \end{figure} \begin{figure} \caption{(Top to bottom) VGG, Resnet-18 and Densenet networks on Cifar10, 100 epochs. SGD with and without momentum, and their off-line accelerated versions with a window size $5$. Left: training loss. 
Right: top-1 validation error.} \label{fig:cifar10_offline} \end{figure} \begin{figure} \caption{Online RNA for training a Resnet-18 on CIFAR-10.} \label{fig:cifar10_online} \end{figure} \begin{table}[h!t] \centering \renewcommand{1.2}{1.2} \begin{tabular}{c|cccc} Tolerance & SGD & SGD+momentum & SGD+RNA & SGD+momentum+RNA \\ \hline 5.0\% & 68 (0.87$\times$) & 59 & 21 (2.81$\times$) & \textbf{16} (3.69$\times$) \\ 2.0\% & 78 (0.99$\times$) & 77 & 47 (1.64$\times$) & \textbf{40} (1.93$\times$) \\ 1.0\% & 82 (1.00$\times$) & 82 & 67 (1.22$\times$) & \textbf{59} (1.39$\times$) \\ 0.5\% & 84 (1.02$\times$) & 86 & 75 (1.15$\times$) & \textbf{63} (1.37$\times$) \\ 0.2\% & 86 (1.13$\times$) & 97 & \textbf{84} (1.15$\times$) & 85 (1.14$\times$) \\ \end{tabular} \begin{tabular}{c|cccc} Tolerance & SGD & SGD+momentum & SGD+RNA & SGD+momentum+RNA \\ \hline 5.0\% & 69 ($0.87\times$) & 60 & 26 ($2.31\times$) & \textbf{24} ($2.50\times$) \\ 2.0\% & 83 ($0.99\times$) & 82 & 52 ($1.58\times$) & \textbf{45} ($1.82\times$) \\ 1.0\% & 84 ($1.02\times$) & 86 & 71 ($1.21\times$) & \textbf{60} ($1.43\times$) \\ 0.5\% & 89 ($0.98\times$) & 87 & 73 ($1.19\times$) & \textbf{62} ($1.40\times$) \\ 0.2\% & \textbf{N/A} & 90 & 99 ($0.90\times$) & \textbf{63} ($1.43\times$) \\ \end{tabular} \begin{tabular}{c|cccc} Tolerance & SGD & SGD+momentum & SGD+RNA & SGD+momentum+RNA \\ \hline 5.0\% & 65 (0.86$\times$) & 56 & 22 (2.55$\times$) & \textbf{13} (4.31$\times$) \\ 2.0\% & 80 (0.98$\times$) & 78 & 45 (1.73$\times$) & \textbf{38} (2.05$\times$) \\ 1.0\% & 83 (1.00$\times$) & 83 & 60 (1.38$\times$) & \textbf{56} (1.48$\times$) \\ 0.5\% & 87 (0.99$\times$) & 86 & 80 (1.08$\times$) & \textbf{66} (1.30$\times$) \\ 0.2\% & 92 (1.01$\times$) & 93 & 86 (1.08$\times$) & \textbf{75} (1.24$\times$) \\ \end{tabular} \caption{Number of epochs required to reach the best test accuracy +$\text{\textit{ Tolerance}}\%$ on CIFAR10 with a (\textit{top} to \textit{bottom}) VGG, Resnet18 and Densenet, using several 
algorithms. The best accuracies are $6.54\%$ (VGG), $5.0\%$ (Resnet-18) and $4.62\%$(Densenet). The speed-up compared to the SGD+momentum baseline is in parenthesis.} \label{tab:speedup} \end{table} \subsubsection{ImageNet} Here, we apply the RNA algorithm to the standard ImageNet dataset. We trained the networks during $90$ epochs ($T = 80$) with a weight decay of $10^{-4}$. We reported the test accuracy on Figure \ref{fig:imnet} for the networks ResNet-50 and ResNet-152. We only tested the offline version of RNA here, because in previous experiments it gives better result than its online counterpart. We again observe that the offline version of Algorithm \ref{algo:block-rna} improves the convergence speed of SGD with and without momentum. In addition, we show a substantial improvement of the accuracy over the non-accelerated baseline. The improvement in the accuracy is reported in Table~\ref{tab:accuracy}. Interestingly, the resulting training loss is smoother than its non accelerated counterpart, which indicates a noise reduction. \begin{figure} \caption{Training a Resnet-52 (\textit{left}) and ResNet-152 (\textit{right}) on validation ImageNet for 90 epochs using SGD with and without momentum, and their off-line accelerated versions.} \label{fig:imnet} \end{figure} \begin{table}[ht] \centering \renewcommand{1.2}{1.2} \begin{tabular}{l|ccccc} ~& Pytorch & SGD & SGD+mom. & SGD+RNA & SGD+mom.+RNA \\ \hline Resnet-50 & 23.85 & 23.808 & 23.346 & 23.412 (-0.396\%) & \textbf{22.914} (-0.432\%) \\ Resnet-152 & 21.69 & N/A & 21.294 & N/A & \textbf{20.884} (-0.410\%) \\ \end{tabular} \caption{Best validation top-1 error percentage on ImageNet. In parenthesis the improvement due to RNA. The first column corresponds to the performance of Pytorch pre-trained models.} \label{tab:accuracy} \end{table} \end{comment} \subsection{Algorithms with Non-symmetric Operators} We conducted numerical experiments to illustrate the performance of RNA on non-symmetric algorithms. 
We consider two different classes of problems: smooth strongly convex problems and non-smooth convex problems. \subsubsection{Smooth Problems} We consider ridge regression and $\ell_2$-regularized logistic regression problems, which are of the form \begin{equation*} h(x) := f(Ax) + g(x) \end{equation*} where $f(Ax) = \frac{1}{2}\|Ax - b\|^2$ for ridge regression, $f(Ax) = \sum_i \log(1 + \exp(- b_i a_i^Tx))$ for logistic regression, and $g(x) = \frac{\mu}{2}\|x\|^2$. The following methods are tested in this experiment. \begin{itemize} \item \textbf{GD.} The gradient descent method $x_{k+1} = x_k - \frac{1}{L}\nabla h(x_k)$, where $L$ is the Lipschitz constant of the gradient. \item \textbf{Nesterov.} Nesterov's accelerated gradient method \citep{nesterov2013introductory} \begin{equation*} x_{k+1} = y_k - \frac{1}{L}\nabla h(y_k), \qquad y_{k+1} = x_{k+1} + \beta(x_{k+1} - x_{k}), \end{equation*} where $\beta = \frac{\sqrt{L} - \sqrt{\mu}}{\sqrt{L} + \sqrt{\mu}}$ and $L$ is the Lipschitz constant of the gradient. \item \textbf{L-BFGS.} The L-BFGS method \citep{liu1989limited} $x_{k+1} = x_k - \alpha_kH_k\nabla h(x_k)$, where the steplength parameter $\alpha_k$ is chosen via Armijo backtracking line search and the memory parameter is chosen to be $10$. \item \textbf{PDGM.} The primal-dual gradient method \citep{chambolle2011first,arrowhurwicz1960} \begin{equation*} y_{k+1} = Prox_{f^*}^{\sigma}(y_k + \sigma Ax_k), \qquad x_{k+1} = Prox_g^{\tau}(x_k - \tau A^*y_{k+1}), \end{equation*} where $\sigma = \frac{1}{\|A\|}\sqrt{\frac{\mu}{\delta}}$, $\tau = \frac{1}{\|A\|}\sqrt{\frac{\delta}{\mu}}$, and $\delta$ is the strong convexity parameter of $f^*$.
\item \textbf{PDGM + Momentum.} The primal-dual gradient method with momentum \citep{chambolle2011first} \begin{equation*} y_{k+1} = Prox_{f^*}^{\sigma}(y_k + \sigma A\bar{x}_k), \qquad x_{k+1} = Prox_g^{\tau}(x_k - \tau A^*y_{k+1}), \qquad \bar{x}_{k+1} = x_{k+1} + \theta (x_{k+1} - x_{k}), \end{equation*} where $\sigma = \frac{1}{\|A\|}\sqrt{\frac{\mu}{\delta}}$, $\tau = \frac{1}{\|A\|}\sqrt{\frac{\delta}{\mu}}$, $\theta = \frac{1}{1 + \frac{2\sqrt{\mu\delta}}{\|A\|}}$, and $\delta$ is the strong convexity parameter of $f^*$. \end{itemize} The Lipschitz constant $L$ is $\|A\|^2 + \mu$ for ridge regression and $\frac{\|A\|^2}{4} + \mu$ for logistic regression. The strong convexity parameter $\delta$ of the dual function $f^*$ is $1$ for ridge regression and $4$ for logistic regression. The proximal operators used in the primal-dual algorithms have closed-form solutions for ridge regression. That is, $Prox_{g}^\tau(x) = \frac{x}{1 + \tau\mu}$ and $Prox_{f^*}^\sigma(y) =\frac{y - \sigma b}{1 + \sigma}$. For logistic regression, the approximate proximal operator of $f^*$ is obtained by running Newton's method until a given tolerance on the accuracy is achieved or a maximum of $100$ iterations is reached. Note that the dominant cost in computing the gradients or proximal operators is the cost of the matrix-vector products $Ax$ and $A^*y$, which is of order $O(Nd)$, while the cost of performing Newton's method to obtain the proximal operator is of order $N$ times the maximum number of iterations $t$. Therefore, when $t < d$, one can ignore the additional cost of Newton's method. We use online RNA on GD, Nesterov and PDGM with a fixed window size $m = 10$ and set $\lambda = 10^{-8}\|R^TR\|_2$. As discussed in Section \ref{s:algos}, RNA can be applied only with specific choices of the step-length parameters in the case of primal-dual methods.
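For concreteness, here is a self-contained sketch of PDGM on a synthetic ridge regression instance, using the closed-form proximal operators above (the data, problem sizes and iteration count are illustrative choices of this sketch):

```python
import numpy as np

# Sketch of PDGM (theta = 0) on ridge regression:
#   min_x 0.5*||Ax - b||^2 + (mu/2)*||x||^2,
# with the closed-form proximal operators
#   Prox_{f*}^sigma(y) = (y - sigma*b)/(1 + sigma),  Prox_g^tau(x) = x/(1 + tau*mu).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
mu, delta = 0.1, 1.0                       # delta: strong convexity of f*
nrmA = np.linalg.norm(A, 2)
sigma = np.sqrt(mu / delta) / nrmA
tau = np.sqrt(delta / mu) / nrmA

x, y = np.zeros(20), np.zeros(50)
for _ in range(5000):
    y = (y + sigma * (A @ x) - sigma * b) / (1 + sigma)   # dual prox step
    x = (x - tau * (A.T @ y)) / (1 + tau * mu)            # primal prox step

# closed-form ridge solution, for comparison
x_star = np.linalg.solve(A.T @ A + mu * np.eye(20), A.T @ b)
```

At the fixed point $y = Ax - b$ and $(A^TA + \mu I)x = A^Tb$, so the iterates converge to the ridge solution.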
In the case of smooth problems, we observe that the choice $\tau = \frac{1}{\|A\|}$ and $\sigma=\frac{1}{\|A\|}$ yields stability when applying RNA to PDGM. We note that this choice is not optimal, and one can improve the results by suitably tuning these parameters. Figure \ref{fig:madelon_nonsym_quad1} shows the performance of different variants of the primal-dual algorithms on ridge regression problems for two different regularization constants. We observe that there is no significant difference in the performance of the method with the momentum term $(\theta)$ as compared to the one with no momentum term. We also observe that, although the choice of steplength parameters mentioned above gives consistent performance across different problems, the improvements obtained with RNA are not very significant. However, choosing $\sigma = \tau = 1/\|A\|$ and applying RNA to PDGM consistently outperformed all other variants. This is consistent with the theoretical observations made in Section \ref{s:algos}, namely that one can find optimal steplength parameters for which RNA is stable and obtains the optimal performance. \begin{figure} \caption{Quadratic loss on the Madelon dataset \citep{guyon2003design}. Left: $\mu = 10^{-2}$. Right: $\mu = 10^{2}$. Comparison of online RNA with other variants of primal-dual gradient methods. } \label{fig:madelon_nonsym_quad1} \end{figure} Figure \ref{fig:madelon_nonsym_quad2} compares the performance of primal-dual algorithms with other well-known algorithms on ridge regression problems. We observe that Nesterov's accelerated gradient method and the primal-dual gradient method consistently outperformed gradient descent, as suggested by the theory, since these methods achieve the optimal rates. The RNA variants of gradient descent and primal-dual methods are competitive and outperform their base algorithms. \begin{figure} \caption{Quadratic loss on the Madelon dataset \citep{guyon2003design}. Left: $\mu = 10^{-2}$. Right: $\mu = 10^{2}$.
Comparison of online RNA on primal-dual gradient methods with other first-order algorithms.} \label{fig:madelon_nonsym_quad2} \end{figure} Figure \ref{fig:madelon_nonsym_logistic1} shows the performance of the methods on logistic regression problems. We observe that the RNA variants substantially improve the performance of the base algorithms. The L-BFGS method with Armijo backtracking line search has the best performance across different problems, and the RNA variants are competitive with this method. We now illustrate the effect of RNA on Nesterov's accelerated gradient method. As discussed in Section~\ref{s:algos}, RNA is applied to a sequence of iterates obtained at regular intervals, and the interval length needs to be chosen based on the problem characteristics. Figures \ref{fig:madelon_nonsym_quad_nest} and \ref{fig:madelon_nonsym_logistic_nest} compare the performance of RNA on Nesterov's sequence of iterates for various interval lengths $p$. We observe that the interval length has a significant effect on the performance of the algorithm, and this choice depends on the trade-off between stability and speed of convergence. That is, the larger the interval length, the higher the chance of getting an accelerated sequence, but the lower the speed of convergence. Higher powers are generally needed for highly ill-conditioned problems. Due to these difficulties, it is clear that for simple momentum terms, one should consider the symmetric part of these iterations and apply RNA to those sequences, as discussed in Section \ref{s:momentum}. We report the results of this approach in the next section. \begin{figure} \caption{Quadratic loss on the Madelon dataset \citep{guyon2003design}.
Left : $\mu = 10^{-2}$. Right : $\mu = 10^{2}$. Comparison of online RNA with different iterate sampling rates, i.e., different powers of the operator, for Nesterov's strongly convex acceleration algorithm.} \label{fig:madelon_nonsym_quad_nest} \end{figure} \begin{figure} \caption{Logistic loss on Madelon \citep{guyon2003design}. Left : $\mu = 10^{-2}$. Right : $\mu = 10^{2}$. Comparison of online RNA with different iterate sampling rates, i.e., different powers of the operator, for Nesterov's strongly convex acceleration algorithm.} \label{fig:madelon_nonsym_logistic_nest} \end{figure} Lastly, we compare the performance of the offline, restart and online versions of RNA on primal-dual gradient methods in Figure \ref{fig:madelon_nonsym_logistic_offline}. We observe that the improvement in performance is more pronounced in the online version of RNA as compared to the offline version. \begin{figure} \caption{Logistic loss on the Madelon \citep{guyon2003design}. Left : $\mu = 10^{-2}$. Right : $\mu = 10^{2}$. Comparison of offline, restart and online variants of RNA on primal-dual gradient methods. } \label{fig:madelon_nonsym_logistic_offline} \end{figure} \subsubsection{Non-Smooth Problems} We consider denoising an image degraded by Gaussian noise using total variation. We refer the reader to \cite{chambolle2016introduction} for details about total variation models. The optimization problem is given as \begin{equation*} \min_{x} \|\nabla x\|_1 + \frac{\mu}{2}\|x - b\|^2 \end{equation*} where \begin{equation*} \|\nabla x\|_1 = \sum_{i,j} |(\nabla x)_{i,j}|, \qquad |(\nabla x)_{i,j}| = \sqrt{((\nabla x)_{i,j}^1)^2 + ((\nabla x)_{i,j}^2)^2} \end{equation*} and $b$ is a $256 \times 256$ noisy input image. This optimization problem is of the form \eqref{prob:primal} with $f(\nabla x) = \|\nabla x\|_1$ and $g(x) = \frac{\mu}{2}\|x - b\|^2$. The gradient operator $\nabla x$ is discretized by forward differencing (see \cite{chambolle2011first}).
The convex conjugate of $f$ is the indicator function of the convex set $P$, where \begin{equation*} P =\{p: \|p\|_{\infty} \leq 1\}, \qquad \|p\|_{\infty} = \max_{i,j}|p_{i,j}|, \qquad |p_{i,j}| = \sqrt{(p_{i,j}^1)^2 + (p_{i,j}^2)^2} \end{equation*} and so the proximal operator is a pointwise projection onto this set. That is, $Prox_{f^*}^{\sigma}(p)_{i,j} =\frac{p_{i,j}}{\max(1,|p_{i,j}|)}$. We compare the performance of the two variants of primal-dual methods with RNA for two different noise levels $\zeta$ and two different regularization constants $\mu$. The steplength parameters are chosen adaptively at each iteration as follows: \begin{itemize} \item \textbf{PDGM} \begin{equation*} \hat{\theta}_{k} = \frac{1}{\sqrt{1 + 2\gamma\tau_k}}\qquad\sigma_{k+1} = \frac{\sigma_k}{\hat{\theta}_k} \qquad \tau_{k+1} = \tau_{k}\hat{\theta}_k \end{equation*} with $\gamma = 0.2 \mu$, $\theta = 0$, $\tau_0 = 0.02$, $\sigma_0 = \frac{4}{\tau_0\|\nabla\|^2}$ \item \textbf{PDGM + Momentum} \begin{equation*} \theta_{k} = \frac{1}{\sqrt{1 + 2\gamma\tau_k}}\qquad\sigma_{k+1} = \frac{\sigma_k}{\theta_k} \qquad \tau_{k+1} = \tau_{k}\theta_k \end{equation*} with $\gamma = 0.7 \mu$ and $\sigma_0 = \tau_0 = \frac{1}{\|\nabla\|}$ \end{itemize} with $\|\nabla \|^2 = 8$. These adaptive choices are standard in the literature and yield the optimal theoretical convergence rates for the momentum variants. We note that these parameters are not carefully fine-tuned to give the best performance for each variant but are chosen based on some simple observations. We use offline RNA instead of online RNA, as we consistently observed that offline RNA is more robust in the high-accuracy regime, while the online variants needed stability-inducing techniques such as line searches. Moreover, for online RNA, the improvement in performance on these non-smooth problems is small, and so the additional cost of solving the linear system is not well justified.
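To make the iteration concrete, the pointwise projection prox and the adaptive PDGM + Momentum rule above can be sketched as follows. This is an illustrative NumPy sketch under the stated parameter choices ($\gamma = 0.7\mu$, $\sigma_0 = \tau_0 = 1/\|\nabla\|$, $\|\nabla\|^2 = 8$), with an assumed Neumann boundary convention for the forward-difference gradient; it is not the authors' implementation and omits the RNA step.

```python
import numpy as np

def grad2d(x):
    # Forward-difference discrete gradient with Neumann boundaries
    # (the last difference along each axis is set to zero).
    g = np.zeros((2,) + x.shape)
    g[0, :-1, :] = x[1:, :] - x[:-1, :]
    g[1, :, :-1] = x[:, 1:] - x[:, :-1]
    return g

def div2d(p):
    # Discrete divergence, the negative adjoint of grad2d:
    # <grad2d(x), p> = -<x, div2d(p)>.
    d = np.zeros(p.shape[1:])
    d[:-1, :] += p[0, :-1, :]
    d[1:, :] -= p[0, :-1, :]
    d[:, :-1] += p[1, :, :-1]
    d[:, 1:] -= p[1, :, :-1]
    return d

def tv_denoise_pdgm(b, mu, iters=300):
    """PDGM + Momentum for  min_x ||grad x||_1 + (mu/2)||x - b||^2,
    with the adaptive steps theta_k = 1/sqrt(1 + 2*gamma*tau_k),
    sigma_{k+1} = sigma_k / theta_k, tau_{k+1} = tau_k * theta_k."""
    gamma = 0.7 * mu
    tau = sigma = 1.0 / np.sqrt(8.0)   # sigma_0 = tau_0 = 1/||grad||
    x, x_bar = b.copy(), b.copy()
    p = np.zeros((2,) + b.shape)
    for _ in range(iters):
        # Dual step: prox of sigma*f* is the pointwise projection onto P.
        q = p + sigma * grad2d(x_bar)
        p = q / np.maximum(1.0, np.sqrt(q[0] ** 2 + q[1] ** 2))
        # Primal step: prox of tau*g with g(x) = (mu/2)||x - b||^2.
        x_new = (x + tau * div2d(p) + tau * mu * b) / (1.0 + tau * mu)
        # Adaptive extrapolation and steplength update.
        theta = 1.0 / np.sqrt(1.0 + 2.0 * gamma * tau)
        x_bar = x_new + theta * (x_new - x)
        x, sigma, tau = x_new, sigma / theta, tau * theta
    return x
```

The product $\sigma_k \tau_k$ stays fixed at $1/\|\nabla\|^2$ under these updates, so the steplength condition is maintained throughout.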
\begin{figure} \caption{Images used in the experiments. Left: True data. Middle: Noisy data with Gaussian noise $\zeta = 0.1$. Right: Noisy data with Gaussian noise $\zeta =0.05$.} \label{fig:lenaimage} \end{figure} Table \ref{tab:imgdenoise} reports the number of iterations required for the distance between the primal function value and the optimal primal function value to fall below a given accuracy. We observe that PDGM + RNA consistently outperformed PDGM and its momentum variant at all accuracy levels. \begin{table}[h!t] \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{c|ccc|ccc} & \multicolumn{3}{c|}{$\zeta = 0.1, \mu = 8$} & \multicolumn{3}{c}{$\zeta = 0.05, \mu = 16$}\\ & $\epsilon=10^{-2}$ & $\epsilon=10^{-4}$ & $\epsilon=10^{-6}$ & $\epsilon=10^{-2}$ & $\epsilon=10^{-4}$ & $\epsilon=10^{-6}$ \\ \hline PDGM & 488 & 1842 & 7146 & 257 & 943 & 3706 \\ PDGM + Momentum & 377 & 1744 & 6813 & 226 & 921 & 3879 \\ PDGM + offlineRNA & \textbf{221} & \textbf{1151} & \textbf{5801} & \textbf{141} & \textbf{671} & \textbf{3241} \\ \end{tabular} \caption{Number of iterations required for the primal accuracy to fall below $\epsilon$ on the images shown in Figure \ref{fig:lenaimage} using primal-dual gradient methods.} \label{tab:imgdenoise} \end{table} \oldsection*{Acknowledgements} The authors are very grateful to Lorenzo Stella for fruitful discussions on acceleration and the Chambolle-Pock method. AA is at CNRS \& d\'epartement d'informatique, \'Ecole normale sup\'erieure, UMR CNRS 8548, 45 rue d'Ulm 75005 Paris, France, INRIA and PSL Research University. The authors would like to acknowledge support from the {\em ML \& Optimisation} joint research initiative with the {\em fonds AXA pour la recherche} and Kamet Ventures, as well as a Google focused award. DS was supported by a European Union Seventh Framework Programme (FP7-PEOPLE-2013-ITN) under grant agreement n.607290 SpaRTaN.
RB was a PhD student at Northwestern University at the time this work was completed and was supported by Department of Energy grant DE-FG02-87ER25047 and DARPA grant 650-4736000-60049398. \end{document}
Fiber bundle construction theorem In mathematics, the fiber bundle construction theorem is a theorem which constructs a fiber bundle from a given base space, fiber, and a suitable set of transition functions. The theorem also gives conditions under which two such bundles are isomorphic. The theorem is important in the associated bundle construction, where one starts with a given bundle and surgically replaces the fiber with a new space while keeping all other data the same. Formal statement Let X and F be topological spaces and let G be a topological group with a continuous left action on F. Given an open cover {Ui} of X and a set of continuous functions $t_{ij}:U_{i}\cap U_{j}\to G$ defined on each nonempty overlap, such that the cocycle condition $t_{ik}(x)=t_{ij}(x)t_{jk}(x)\qquad \forall x\in U_{i}\cap U_{j}\cap U_{k}$ holds, there exists a fiber bundle E → X with fiber F and structure group G that is trivializable over {Ui} with transition functions tij. Let E′ be another fiber bundle with the same base space, fiber, structure group, and trivializing neighborhoods, but with transition functions t′ij. If the action of G on F is faithful, then E′ and E are isomorphic if and only if there exist functions $t_{i}:U_{i}\to G$ such that $t'_{ij}(x)=t_{i}(x)^{-1}t_{ij}(x)t_{j}(x)\qquad \forall x\in U_{i}\cap U_{j}.$ Taking ti to be constant functions to the identity in G, we see that two fiber bundles with the same base, fiber, structure group, trivializing neighborhoods, and transition functions are isomorphic. A similar theorem holds in the smooth category, where X and F are smooth manifolds, G is a Lie group with a smooth left action on F and the maps tij are all smooth. Construction The proof of the theorem is constructive. That is, it actually constructs a fiber bundle with the given properties.
One starts by taking the disjoint union of the product spaces Ui × F $T=\coprod _{i\in I}U_{i}\times F=\{(i,x,y):i\in I,x\in U_{i},y\in F\}$ and then forms the quotient by the equivalence relation $(j,x,y)\sim (i,x,t_{ij}(x)\cdot y)\qquad \forall x\in U_{i}\cap U_{j},y\in F.$ The total space E of the bundle is T/~ and the projection π : E → X is the map which sends the equivalence class of (i, x, y) to x. The local trivializations $\phi _{i}:\pi ^{-1}(U_{i})\to U_{i}\times F$ are then defined by $\phi _{i}^{-1}(x,y)=[(i,x,y)].$ Associated bundle Let E → X be a fiber bundle with fiber F and structure group G, and let F′ be another left G-space. One can form an associated bundle E′ → X with fiber F′ and structure group G by taking any local trivialization of E and replacing F by F′ in the construction theorem. If one takes F′ to be G with the action of left multiplication, then one obtains the associated principal bundle. References • Sharpe, R. W. (1997). Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. New York: Springer. ISBN 0-387-94732-9. • Steenrod, Norman (1951). The Topology of Fibre Bundles. Princeton: Princeton University Press. ISBN 0-691-00548-6. See Part I, §2.10 and §3.
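As a concrete instance of the construction above (a standard textbook example, not spelled out in this article), the cylinder and the Möbius band arise from the smallest nontrivial choice of transition functions:

```latex
% Base X = S^1, fiber F = (-1,1), structure group G = Z/2 = {+1,-1}
% acting on F by sign.
Take $X = S^1$, fiber $F = (-1,1)$, and structure group
$G = \mathbb{Z}/2 = \{+1,-1\}$ acting on $F$ by sign. Cover $S^1$ by two
open arcs $U_1, U_2$ whose intersection $U_1 \cap U_2 = V \sqcup W$ has
two components; with only two charts the cocycle condition merely forces
$t_{21} = t_{12}^{-1}$. Taking $t_{12} \equiv +1$ yields the trivial
bundle (a cylinder), while $t_{12} = +1$ on $V$ and $-1$ on $W$ yields
the M\"obius band. The two are not isomorphic: each $t_i : U_i \to G$ is
continuous on a connected arc and hence constant, so
$t_1^{-1}\, t_{12}\, t_2$ differs from $t_{12}$ by a single global sign
and can never turn the non-constant $t_{12}$ into the constant one.
```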
\begin{document}

\def\FigureA1 {\begin{figure} \caption{Fragments of lattices ${\mathbb A}_2$ and ${\mathbb H}_2$} \label{Fig1} \end{figure}}
\def\FigureB2 {\begin{figure}\label{Fig2} \end{figure}}
\def\FigureC3 {\begin{figure}\label{Fig3} \end{figure}}
\def\FigureD4 {\begin{figure}\label{Fig4} \end{figure}}
\def\FigureE5 {\begin{figure}\label{Fig5} \label{D2=147} \end{figure}}
\def\FigureF6 {\begin{figure}\label{Fig6} \end{figure}}
\def\FigureG7 {\begin{figure}\label{Fig7} \end{figure}}
\def\FigureH8 {\begin{figure}\label{Fig8} \end{figure}}
\def\FigureI9 {\begin{figure}\label{Fig9} \end{figure}}
\def\FigureJ10 {\begin{figure} \caption{The template (black circles) and $D$-rhombuses (thick lines) on ${\mathbb A}_2$} \label{Fig10} \end{figure}}
\def\FigureK11 {\begin{figure} \caption{Templates: ${\varphi}$-correct (light-gray) and non-${\varphi}$-correct (medium- and dark-gray), on ${\mathbb A}_2$} \label{Fig11} \end{figure}}
\def\FigureL12 {\begin{figure}\label{Fig12} \end{figure}}
\def\FigureM13 {\begin{figure}\label{Fig13} \label{Comp1} \label{Comp2} \end{figure}}
\def\FigureN14 {\begin{figure}\label{Fig14} \label{Comp5} \label{Comp6} \label{Comp4} \end{figure}}
\def\FigureO15 {\begin{figure}\label{Fig15} \end{figure}}
\def\FigureP16 {\begin{figure} \caption{Single and double $u^{-2}$-insertions for $D^2=49$ on ${\mathbb A}_2$} \label{Fig16} \end{figure}}
\def\FigureQ17 {\begin{figure} \caption{Single $u^{-2}$-insertions for $D^2=169$ on ${\mathbb A}_2$} \label{Fig17} \end{figure}}
\def\FigureR18 {\begin{figure} \caption{Double, triple and quadruple $u^{-2}$-insertions for $D^2=169$ on ${\mathbb A}_2$} \label{Fig18} \end{figure}}
\def\FigureS19 {\begin{figure} \caption{Single $u^{-2}$-insertions for $D^2=147$ on ${\mathbb A}_2$} \label{Fig19} \end{figure}}
\def\FigureT20 {\begin{figure} \caption{Double, triple and quadruple $u^{-2}$-insertions for $D^2=147$ on ${\mathbb A}_2$} \label{Fig20} \end{figure}}
\def\FigureU21 {\begin{figure}\label{Fig21} \end{figure}}
\def\FigureV22 {\begin{figure}\label{Fig22} \end{figure}}
\def\FigureW23 {\begin{figure}\label{Fig23} \end{figure}}
\def\FigureX24 {\begin{figure}\label{Fig24} \end{figure}}
\def\Fig15 {}
\def\FigureY26 {\begin{figure}\label{Fig25} \end{figure}}
\def\FigureZ27 {\begin{figure} \caption{A deletable vertex of type $2\pi/3$ (a large black ball), in a horizontal PGS (a) and in an inclined PGS (b), for $D^2=49$.} \label{Fig26} \end{figure}}
\def\Figurea1{}
\def\Figureb2{}
\def\WFigure25 {\begin{figure}\label{Fig24A} \end{figure}}
\def\XFigure24A {\begin{figure}\label{Fig24A} \end{figure}}
\def\YFigure25 {\begin{figure}\label{Fig25A} \end{figure}}
\def\ZFigure26 {\begin{figure}\label{Fig26A} \end{figure}}

\title{\bf High-density hard-core model\\ on triangular and hexagonal lattices}
\author{\bf A. Mazel$^1$, I. Stuhl$^2$, Y. Suhov$^{2-4}$}
\date{}
\footnotetext{2010 {\em Mathematics Subject Classification:\; primary 60G60, 82B20, 82B26}}
\footnotetext{{\em Key words and phrases:} triangular lattice, hexagonal lattice, hard-core configuration, disk-packing, extreme Gibbs measure, high-density/large fugacity, periodic ground state, Delaunay triangulation, minimal re-distributed area of a triangle, maximally-dense sub-lattice, maximally-dense non-sub-lattice configuration, contour representation of the partition function, Peierls bound, Pirogov-Sinai theory, dominance, local repelling forces, computer-assisted enumeration, sliding

\noindent $^1$ AMC Health, New York, NY, USA;\;\; $^2$ Math Dept, Penn State University, PA, USA;\;\; $^3$ DPMMS, University of Cambridge and St John's College, Cambridge, UK;\;\; $^4$ IITP RAS, Moscow, RF}
\maketitle

\begin{abstract} We perform a rigorous study of the Gibbs statistics of high-density hard-core random configurations on a unit triangular lattice ${\mathbb A}_2$ and a unit honeycomb graph ${\mathbb H}_2$, for any value of the (Euclidean) repulsion diameter $D>0$. Only attainable values of $D$ are relevant, for which $D^2=a^2+b^2+ab$, $a, b \in\mathbb{Z}$ (L\"oschian numbers). Depending on the arithmetic properties of $D^2$, we identify, for large fugacities, the pure phases (extreme Gibbs measures) and specify their symmetries. The answers depend on the way(s) an equilateral triangle of side-length $D$ can be inscribed in ${\mathbb A}_2$ or ${\mathbb H}_2$.
On ${\mathbb A}_2$, our approach works for all attainable $D^2$; on ${\mathbb H}_2$ we have to exclude $D^2 = 4, 7, 31, 133$, where a sliding phenomenon occurs, similar to that on a unit square lattice ${\mathbb Z}^2$. For all values of $D^2$ apart from the excluded ones, we prove the existence of a first-order phase transition where the number of co-existing pure phases grows at least as $O(D^2)$. The proof is based on the Pirogov--Sinai theory, which requires non-trivial verifications of its key assumptions: finiteness of the set of periodic ground states and the Peierls bound. To establish the Peierls bound, we develop a general method based on the concept of a re-distributed area for Delaunay triangles. Some of the presented proofs are computer-assisted. As a by-product of the ground state identification, we solve the disk-packing problem on ${\mathbb A}_2$ and ${\mathbb H}_2$ for any value of the disk diameter $D$. \end{abstract} \section{A summary of results}\label{Sec1} \subsection{Introduction}\label{SubSec1.1} We analyze properties of random configurations of hard disks of a given diameter $D$, with centers in a unit triangular lattice ${\mathbb A}_2$ and a unit honeycomb lattice$^{1)}$\footnote{$^{1)}$Strictly speaking, ${\mathbb H}_2$ is not a lattice in the algebraic sense. However, we follow a physical tradition and refer to ${\mathbb H}_2$ as a lattice.} ${\mathbb H}_2$, both embedded in ${\mathbb R}^2$. It is also convenient to consider ${\mathbb H}_2$ as a subset of ${\mathbb A}_2$. Cf.\ Figure \ref{Fig1}. Together with a unit square lattice ${\mathbb Z}^2$, these are popular examples of `regular' planar graphs for a number of probabilistic models, including percolation and phase transitions. A separate place belongs to a model of hard disks in ${\mathbb R}^2$.
Historically, the hard-core model emerged about 150 years ago in an attempt to describe a system of atoms, molecules or granules, as represented by rigid spheres of a given diameter; a famous example of its application was the Boltzmann equation. Since then, the model proliferated in a number of pure and applied mathematical disciplines and generated a substantial literature. A comprehensive discussion of various aspects of the hard-core model and its applications (including elements of criticism) can be found, e.g., in \cite{CKPUZ}, \cite{PaZ}, \cite{PeS}, \cite{BKZZ}, \cite{KMRTSZ}. The study of lattice hard-core (H-C) models started with the result by Dobrushin \cite{Dob} about non-uniqueness of pure phases on $\bbZ^d$, $d\geq 2$, with a nearest-neighbor exclusion in a high-density/large fugacity regime. The paper \cite{HeP} established non-uniqueness of a pure phase for a particular sequence of exclusion distances on ${\mathbb A}} \def\bbD{{\mathbb D}_2$, without specifying the pure phases. We also note the (rather remarkable) result of paper \cite{Ba} where a critical value of fugacity $\displaystyle u_{\rm{cr}}=\frac{1}{2}(11+5{\sqrt 5})$ has been calculated, for the H-C diameter $D={\sqrt 3}$ on ${\mathbb A}} \def\bbD{{\mathbb D}_2$. This, apparently, indicates an upper limiting value for fugacity $u$ for the low-density regime where a pure phase is unique and given via a polymer expansion around an empty configuration. The paper \cite{JaL} establishes the existence of order-disorder phase transitions for a class of `non-sliding' H-C lattice particle systems on a lattice in two or more dimensions. \vskip-35pt \FigureA1 The present paper continues and extends the works \cite{Dob} and \cite{HeP} in a general setting. A detailed study of the H-C model on $\bbZ^2$ has been performed in \cite{MSS1}.
We analyze the {\it ground states} and {\it Gibbs} or {\it DLR} measures for the H-C model on ${\mathbb A}} \def\bbD{{\mathbb D}_2/\bbH_2$, in a regime of high-density/large-fugacity. The assumption that the fugacity $u$ is large is adopted throughout the paper without repeatedly stressing it. The analysis of Gibbs measures is reduced to that of {\it extreme Gibbs measures} (EGMs, or $D$-EGMs when dependence on $D$ is emphasized). An EGM is interpreted as a {\it pure phase} in the phase diagram of the model. Formal definitions of the notions used in the Introduction are provided in Section \ref{Sec2}. The H-C exclusion is imposed in the Euclidean ${\mathbb R}} \def\bbS{{\mathbb S}} \def\bbT{{\mathbb T}^2$-metric and is defined by the H-C {\it exclusion diameter} $D$: the shortest allowed distance between two occupied sites. Without loss of generality we assume throughout the paper that the value $D$ (or $D^2$) is {\it attainable}, i.e., there are pairs of sites in ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$ with the distance exactly $D$ between them. The set of attainable values is the same for ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$ and is characterized through a {\it L\"oschian decomposition} of the number $D^2$. Referring to $D^2$ rather than to $D$ is more convenient since $D^2$ is a positive integer. The problem of identification of the EGM structure is reduced -- via the {\it Pirogov--Sinai} (PS) {\it theory} \cite{PiS}, \cite{Za} -- to an analysis of {\it periodic ground states} (PGSs or $D$-PGSs), including a verification of the {\it Peierls bound}. Informally speaking, the outcome of the PS theory is that every EGM is generated by a PGS. The converse is not always true: there may be PGSs that do not generate EGMs. The PGSs that generate EGMs are referred to as {\it dominant} (stable in the terminology of \cite{Za}). Our results can be briefly summarized as follows.
\begin{description} \item[{\rm{(i)}}] On both ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$ we describe the ground states (periodic and non-periodic) for all values of $D$ and fugacity $u>1$. See Remark 4.1. This solves the disk-packing problem on ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$ for any disk diameter $D$; cf. \cite{CD}. The PGSs are naturally partitioned into equivalence classes defined by lattice symmetries (shifts and reflections). Apart from 13 values of $D$ on $\bbH_2$, all $D$-PGSs are constructed from sub-lattices. \item[{\rm{(ii)}}] For the PGSs we establish a Peierls bound where the Peierls constant grows with $u$ and decreases with $D$. See Lemmas \ref{Lem5.1} and \ref{Lem5.4}. \item[{\rm{(iii)}}] The structure of $D$-EGMs (the phase diagram) inherits that of $D$-PGSs. First, for at least one PGS-equivalence class, each PGS from the class generates a distinct EGM. That is, we have a first-order phase transition. This fact is proven for every $D$, except for 4 values of $D$ on $\bbH_2$. See Theorem III in Section \ref{SubSec3.2}. \item[{\rm{(iv)}}] In the case where the PGS-equivalence class is unique, we obtain a complete phase diagram. The sets of values $D$ with a unique PGS-equivalence class on ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$ are infinite and explicitly described. The number of $D$-EGMs in this case grows as $O(D^2)$ and is further specified. See Theorems 1, 2, 7, 8, 11, 12 in Section \ref{Sec3}. \item[{\rm{(v)}}] The sets of values $D$ for which the PGS-equivalence class is non-unique are also infinite and explicitly described, on both ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$. The number of $D$-EGMs in this case grows at least as $O(D^2)$. The question of which classes are dominant, i.e., generate EGMs, requires an additional analysis. We conduct such an analysis for a number of values of $D$ on ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$, exploring various emerging possibilities.
See Theorems 4, 5, 6, 10 in Section \ref{Sec3}. \item[{\rm{(vi)}}] We establish that a phenomenon of {\it sliding} occurs only on $\bbH_2$, for $D^2= 4, 7, 31, 133$. See Lemmas 4.7--4.10. For these values, the structure of the phase diagram remains open. Cf. Section \ref{Sec8}. The phenomenon of sliding was first discovered by Dobrushin (1968) on $\bbZ^2$. Cf. \cite{MSS1}. \end{description} \subsection{The PGSs and EGMs}\label{SubSec1.2} The structure of PGSs on both ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$ (and also on $\bbZ^2$: cf. \cite{MSS1}) depends on arithmetic properties of the number $D^2$. Moreover, the image of a $D$-PGS under a {\it symmetry} (of ${\mathbb A}} \def\bbD{{\mathbb D}_2$ or $\bbH_2$), i.e., a lattice shift or reflection, is also a $D$-PGS, and one can speak about the corresponding {\it equivalence classes} of PGSs with respect to ${\mathbb A}} \def\bbD{{\mathbb D}_2/\bbH_2$-symmetries. This is important since dominance is a class property: if an equivalence class contains a dominant PGS then all PGSs from the class are dominant. With regard to the formation of PGS-equivalence classes, the entire set of attainable values of $D$ (or $D^2$) is divided into disjoint subsets. For lattice ${\mathbb A}} \def\bbD{{\mathbb D}_2$ we consider two subsets of values $D^2$ (both infinite), called Classes TA and TB. On $\bbH_2$ we deal with six subsets of values $D^2$ called Classes HA, HB, HC (infinite) and HD, HE, HS (finite). Here T stands for triangular and H for honeycomb. These subsets are further subdivided according to specific aspects of the structure of PGSs and EGMs. Cf. Sections \ref{SubSec1.3}, \ref{SubSec1.4}. Physically speaking, the above subsets are characterized by the possibility (or possibilities, or the lack thereof) of inscribing an equilateral triangle of side-length $D$ in ${\mathbb A}} \def\bbD{{\mathbb D}_2$ or $\bbH_2$.
In the case of ${\mathbb A}} \def\bbD{{\mathbb D}_2$ this is always possible, but the inscription may be non-unique. For $\bbH_2$ it is not always possible, which leads to a more complicated partition of the values $D^2$. On ${\mathbb A}} \def\bbD{{\mathbb D}_2$, the $D$-PGSs are constructed from $D$-{\it sub-lattices}, i.e., sub-lattices for which a fundamental parallelogram is a $D$-rhombus formed by $2$ equilateral triangles of side-length $D$ ($D$-{\it triangles}, for short). If $D^2$ is from Class TA, the $D$-sub-lattice is unique, and the PGSs form a single equivalence class. Consequently, for Class TA we establish a complete phase diagram in the large-fugacity regime. In this regime each PGS applied as a boundary condition generates a distinct $D$-EGM, and all $D$-EGMs are obtained in this way; cf. Theorems 1, 2 in Section \ref{SubSec3.3}. In other words, for values $D$ from Class TA all PGSs are dominant. Also, the EGMs inherit symmetries between their generating PGSs. Similar properties hold true for Class HA on $\bbH_2$; cf. Theorems 7, 8 in Section \ref{SubSec3.5}. Classes TA and HA yield the simplest cases of the PGS/EGM analysis on ${\mathbb A}} \def\bbD{{\mathbb D}_2/\bbH_2$. For $D^2$ from Classes TB or HB, the $D$-PGSs are still constructed from $D$-sub-lattices, but there are multiple PGS-equivalence classes. There is always a dominant equivalence class (we conjecture that it is unique), but the problem of identifying which classes are dominant is more involved. Here we solve it for some specific values of $D$, indicating various emerging possibilities for a plausible general answer. Arithmetically, Classes HA and HB are formed by the values $D^2$ from Classes TA and TB divisible by 3. Next, Classes HC, HD, HE, and HS on $\bbH_2$ consist of values $D^2$ non-divisible by $3$. These classes stem from the above-mentioned features of $\bbH_2$, namely, the lack of a possibility to inscribe a $D$-triangle for some values of $D$.
Class HC (which covers the bulk of values of $D$ on $\bbH_2$) is determined by the condition that $D^2$ is not divisible by 3 and is non-exceptional in a sense made precise below. For values $D$ from Class HC we look for the attainable $D^*>D$ which (i) has $(D^*)^2$ divisible by 3 (i.e., falls in Class HA or HB) and (ii) is nearest to $D$ with this property. Then the PGSs and EGMs for $D^2$ are the same as for $(D^*)^2$, i.e., are constructed from $D^*$-sub-lattices. Classes HD, HE and HS are deemed exceptional and are dealt with on a case-by-case basis (with the help of a computer). Cf. Section \ref{SubSec1.4}. For these classes not all PGSs are constructed from $D$- or $D^*$-sub-lattices. (For $D^2$ from Class HD none of the PGSs is a sub-lattice.) In particular, Class HS consists of values $D^2= 4, 7, 31, 133$ which exhibit a phenomenon of sliding on $\bbH_2$. A similar phenomenon occurs on lattice $\bbZ^2$ as well; cf. \cite{MSS1}. For the values of $D^2$ with sliding, the PS theory is not applicable since the number of PGSs is infinite and -- more importantly -- the Peierls bound does not hold. Our conjecture is that the EGM for these values of $D$ is unique when fugacity $u$ is large enough (and, indeed, for all values $u>0$). We briefly comment on sliding on $\bbH_2$ in Section \ref{Sec8}. Cf. Section 2.2 in \cite{MSS1} where a similar problem is treated on lattice $\bbZ^2$. As was mentioned before, an attainable value $D^2$ admits a L\"oschian decomposition. It means that $D^2$ is a positive integer of the form $a^2 + b^2 + ab$ where $a$ and $b$ are integers. L\"oschian numbers arise naturally in this context as they are the norms of Eisenstein integers that form $\mathbb{A}_2$.
The L\"{o}schian numbers form the sequence A003136 in the OEIS (the On-Line Encyclopedia of Integer Sequences); the initial list is: $$\beac 1,3,4,7,9,12,13,16,19,21,25,27,28,31,36,37,39,43,48,49,52,57,61,63,64,67,73,\\ 75,76,79,81,84,91,93,97,100,103,108,109,111,112,117,121,124,127,129,133,139,\\ 144,147,148,151,156,157,163,169,171,172,175,181,183,189,192,193,196,199,201,\\ 208,211,217,219,223,225,228,229,237,241,243,244,247,252,256,259,268,271,273, \\ 277,279,283,289,291,292,300,301,304,307,309,313,316,324,325,327,331,333,336,\\ 337,343,351,361,363,364,367,372,379,387,388,397,399,400,403,409,412,417,421,\\ 427,432,433,436,439,441,444,448,453,457,463,468,469,471,475,481,484,487,489. \end{array}$$ An equivalent characterization of a L\"oschian number is that its rational prime factorization must contain primes of the form $3v+2$, $v\in\bbZ_+$, in even powers (there is no restriction on factor 3 or primes of the form $3v+1$). We use some classical results regarding these integers which are presented in a convenient form in \cite{M, N}. A compendium of the related theory is given in the monograph \cite{CS}. The PGS identification and the specification of the Peierls bound are done with the help of {\it Voronoi cells} (V-cells) or through the construction of Delaunay ${\mathbb A}} \def\bbD{{\mathbb D}_2/\bbH_2$-triangles which {\it minimize the re-distributed area} (MRA-triangles). As was said, the Peierls bound is given in Theorem II from Section \ref{SubSec3.2}. In this paper we employ the approach based on MRA-triangles (also used in \cite{MSS1}), but for completeness provide a brief account of the V-cell method as well. Cf. Sections \ref{Sec4}, \ref{Sec5}. In Sections \ref{SubSec1.3} and \ref{SubSec1.4} we give a description of our results on PGSs for the H-C model on ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$.
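Both characterizations of the L\"oschian numbers -- via the quadratic form $a^2+b^2+ab$ and via the prime factorization -- are easy to cross-check numerically. The following small script is our own illustration (not part of the paper's computer-assisted proofs) and reproduces the initial list above:

```python
# Cross-check two characterizations of Löschian numbers up to 489:
# (1) n = a^2 + b^2 + a*b for non-negative integers a, b;
# (2) every prime p ≡ 2 (mod 3) appears in n to an even power.

def loeschian_by_form(limit):
    vals = set()
    a = 0
    while a * a <= limit:
        b = a
        while a * a + b * b + a * b <= limit:
            n = a * a + b * b + a * b
            if n > 0:
                vals.add(n)
            b += 1
        a += 1
    return vals

def prime_factors(n):
    f = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def loeschian_by_primes(n):
    return all(e % 2 == 0 for p, e in prime_factors(n).items() if p % 3 == 2)

by_form = loeschian_by_form(489)
by_primes = {n for n in range(1, 490) if loeschian_by_primes(n)}
assert by_form == by_primes
print(sorted(by_form)[:10])  # [1, 3, 4, 7, 9, 12, 13, 16, 19, 21]
```

The two sets agree, matching the initial segment of sequence A003136 displayed above.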
\subsection{PGSs and EGMs on ${\mathbb A}} \def\bbD{{\mathbb D}_2$}\label{SubSec1.3} On ${\mathbb A}} \def\bbD{{\mathbb D}_2$ the situation is made easier by the above-mentioned fact that the PGSs are constructed from $D$-sub-lattices. That is, a PGS-equivalence class is determined either by a $D$-sub-lattice -- if it is reflection-invariant -- or by a pair of $D$-sub-lattices taken to each other by a reflection. Consequently, a PGS-class contains $D^2$ or $2D^2$ PGSs obtained from each other by lattice shifts and reflections. The value $D^2$ represents the number of sites in a $D$-rhombus which gives the number of different lattice shifts for PGSs. The number of PGS-equivalence classes is related to the number and structure of non-negative solutions to the equation $D^2=a^2+b^2+ab$. Accordingly, it is natural to extract the following classes of values of $D^2$. $\bullet$ Class TA1: $D^2$ is an integer whose prime decomposition contains (i) a factor $3$ in any power, (ii) primes of the form $3v+2$, in even powers, possibly zero, and (iii) no prime of the form $3v+1$. This happens iff $D^2=a^2$ or $3a^2$ where $a\in\bbN$ has only primes $3v+2$ in its prime decomposition. The first 40 values of $D^2$ falling in this category are $D^2 = $ 1, 3, 4, 9, 12, 16, 25, 27, 36, 48, 64, 75, 81, 100, 108, 121, 144, 192, 225, 243, 256, 289, 300, 324, 363, 400, 432, 484, 529, 576, 625, 675, 729, 768, 841, 867, 900, 972, 1024, 1089. $\bullet$ Class TA2: $D^2$ is an integer whose prime decomposition contains (i) a factor 3 in any power, (ii) primes of the form $3v+2$, in even powers, possibly zero, and (iii) a single prime of the form $3v+1$ (entering in power 1). This happens iff $D^2$ admits a unique decomposition as $a^2+b^2+ab$ (modulo the permutation of $a$ and $b$) and we have $a,b\in\bbN$, $a\neq b$. 
The first 40 values of $D^2$ from Class TA2 are 7, 13, 19, 21, 28, 31, 37, 39, 43, 52, 57, 61, 63, 67, 73, 76, 79, 84, 93, 97, 103, 109, 111, 112, 117, 124, 127, 129, 139, 148, 151, 156, 157, 163, 171, 172, 175, 181, 183, 189. $\bullet$ Class TA: the union of Classes TA1 and TA2. $\bullet$ Class TB: all remaining attainable values of $D$ (or $D^2$). Class TB consists of positive integers $D^2$ whose prime decomposition contains (i) a factor 3 in any power, (ii) primes of the form $3v+2$, in even powers, possibly zero, and (iii) at least two primes of the form $3v+1$ (possibly, identical). It occurs iff $D^2$ admits a non-unique L\"oschian decomposition, i.e., there is more than one solution $(a,b)$ to the Diophantine equation $D^2=a^2+b^2+ab$, with $a, b$ non-negative integers, again, modulo the permutation of $a$, $b$. The first 40 values of $D^2$ from Class TB are 49, 91, 133, 147, 169, 196, 217, 247, 259, 273, 301, 343, 361, 364, 399, 403, 427, 441, 469, 481, 507, 511, 532, 553, 559, 588, 589, 637, 651, 676, 679, 703, 721, 741, 763, 777, 784, 793, 817, 819. For $D^2$ from TA there is a single PGS-equivalence class, while for $D^2$ from TB the number of PGS-equivalence classes is greater than one. \vskip-40pt \FigureB2 It is convenient to refer to triangles with vertices in ${\mathbb A}} \def\bbD{{\mathbb D}_2$ as ${\mathbb A}} \def\bbD{{\mathbb D}_2$-triangles. As we saw earlier, an important role is played by $D$-triangles. We will distinguish between 3 types of $D$-triangles: {\it horizontal}, with sides fitting ${\mathbb A}} \def\bbD{{\mathbb D}_2$, {\it vertical}, with sides perpendicular to constituent lines of ${\mathbb A}} \def\bbD{{\mathbb D}_2$, and {\it inclined}, covering the remaining cases. We will also use $D$-triangles in the plane ${\mathbb R}} \def\bbS{{\mathbb S}} \def\bbT{{\mathbb T}^2$, for general diameters $D>0$.
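The trichotomy TA1/TA2/TB can be reproduced by a short enumeration of L\"oschian decompositions. The sketch below is our own illustration (the function names are not from the paper); it classifies $D^2$ by the number of solutions $(a,b)$ with $0\le a\le b$:

```python
# Classify an attainable D^2 on A_2 via its decompositions D^2 = a^2 + b^2 + a*b:
#   TA1: unique decomposition, with a == 0 or a == b (D^2 = b^2 or 3a^2);
#   TA2: unique decomposition with 0 < a < b;
#   TB : more than one decomposition.
from math import isqrt

def decompositions(n):
    """All (a, b) with 0 <= a <= b and a*a + b*b + a*b == n."""
    out = []
    a = 0
    while 3 * a * a <= n:
        disc = 4 * n - 3 * a * a        # from b = (-a + sqrt(4n - 3a^2)) / 2
        s = isqrt(disc)
        if s * s == disc and (s - a) % 2 == 0 and (s - a) // 2 >= a:
            out.append((a, (s - a) // 2))
        a += 1
    return out

def t_class(d2):
    dec = decompositions(d2)
    if not dec:
        return None        # d2 is not attainable (not a Löschian number)
    if len(dec) > 1:
        return "TB"        # non-unique Löschian decomposition
    a, b = dec[0]
    return "TA1" if a == 0 or a == b else "TA2"

print([d2 for d2 in range(1, 200) if t_class(d2) == "TB"])
# [49, 91, 133, 147, 169, 196] -- the initial TB values listed above
```

For instance, $49=7^2=3^2+5^2+3\cdot 5$ has two decompositions and lands in Class TB, in agreement with the list.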
The above terminology is extended to the $D$-sub-lattices and $D$-PGSs: we speak of horizontal PGSs, vertical PGSs and inclined PGSs, respectively, on both ${\mathbb A}} \def\bbD{{\mathbb D}_2/\bbH_2$. The PGSs for Class TA1 are all horizontal when $D^2=a^2$ and all vertical when $D^2=3a^2$; for Class TA2 they are all inclined. As was said before, in Theorems 1, 2 from Section \ref{SubSec3.3} we prove that in the large-fugacity regime on ${\mathbb A}} \def\bbD{{\mathbb D}_2$, there are exactly $D^2$ EGMs if $D$ is from Class TA1 and exactly $2D^2$ EGMs if $D$ is from Class TA2. In both cases, there is a single PGS-equivalence class which is dominant. \FigureC3 For every $D^2$ from Class TB we prove that at least one PGS-equivalence class generates $D$-EGMs. See Theorem 3 in Section \ref{SubSec3.4}. As we mentioned before, the structure of EGMs is defined by the property of dominance of PGSs. For the specific values $D^2 = 49, 147, 169$ we present a new technique that allows us to determine which PGS-class is dominant, via a specific count of density of {\it local excitations}. In the terminology from \cite{Za}, it is a specific analysis of {\it small contours}. See Theorems 4, 5, and 6 in Section \ref{SubSec3.4}. \subsection{PGSs and EGMs on $\bbH_2$}\label{SubSec1.4} On lattice $\bbH_2$ we identify the following pair-wise disjoint sets of values of $D$: HA1, HA2, HA (the union of HA1 and HA2), HB, HC (all infinite), HD, HE, HS (all finite). $\bullet$ Class HS: 4 values with {\it sliding} $D^2= 4, 7, 31, 133$; see Section \ref{Sec8}. $\bullet$ Class HA1: the values $D$ from the above Class TA1 such that $D^2$ is divisible by $3$. That is, $D^2=9b^2$ or $D^2=3b^2$ where $b\in\bbN$. The initial list of 30 such values has $D^2 =$ 3, 9, 12, 27, 36, 48, 75, 81, 108, 144, 192, 225, 243, 300, 324, 363, 432, 576, 675, 729, 768, 867, 900, 972, 1089, 1137, 1200, 1296, 1389, 1452. 
$\bullet$ Class HA2: the values $D$ from the above Class TA2 such that $D^2$ is divisible by $3$. The initial list of 30 such values has $D^2 =$ 21, 39, 57, 63, 84, 93, 111, 117, 129, 156, 171, 183, 189, 201, 219, 237, 252, 279, 291, 309, 327, 333, 336, 351, 372, 381, 387, 417, 444, 453. $\bullet$ Class HB: the values $D$ from the above Class TB such that $D^2$ is divisible by $3$. The initial list of 30 such values has $D^2 =$ 147, 273, 399, 441, 507, 588, 651, 741, 777, 819, 903, 1029, 1083, 1092, 1197, 1209, 1281, 1323, 1407, 1443, 1521, 1533, 1596, 1659, 1677, 1764, 1767, 1911, 1953, 2028. $\bullet$ Class HC: the remaining values of $D$, except for the values from Classes HD and HE below. Here the initial list of 30 values has $D^2=$ 19, 25, 37, 38, 43, 52, 61, 73, 76, 79, 84, 91, 100, 103, 109, 121, 124, 127, 139, 148, 151, 157, 163, 169, 172, 175, 181, 193, 196, 199. $\bullet$ Class HD: 9 values where $D^2=$ 1, 13, 16, 28, 49, 64, 97, 157, 256. $\bullet$ Class HE: 1 value $D^2=$ 67. In the analysis of the EGMs, the values $D$ from Class HS are disregarded. As was said before, the PS theory does not apply for such $D$. \FigureD4 Now, suppose $D$ is from Class HA. Then the model on $\bbH_2$ with a large fugacity has $2D^2/3$ EGMs if $D$ falls in Class TA1 and $4D^2/3$ if $D$ falls in Class TA2. See Theorems 7, 8 in Section \ref{SubSec3.5}. In Class HA, the PGSs stem from $D$-sub-lattices in ${\mathbb A}} \def\bbD{{\mathbb D}_2$. For HA1, the PGSs are all horizontal if $D^2=9b^2$ and all vertical if $D^2=3b^2$; for HA2 the PGSs are all inclined. We refer to $D$-admissible configurations on $\bbH_2$ constructed from a $D$-sub-lattice as $\alpha$-configurations or configurations of type $\alpha$, or -- when the value $D$ should be highlighted -- as $(D,\alpha )$-configurations (in short: $\alpha$-ACs or $(D,\alpha)$-ACs). We will also use the terms $\alpha$-PGS and $(D,\alpha )$-PGS.
Summarizing, for Class HA we obtain a situation similar to Class TA on ${\mathbb A}} \def\bbD{{\mathbb D}_2$. Cf. Figure 4. The picture for Class HB is analogous to that for Class TB. That is, only the dominant $\alpha$-PGSs give rise to EGMs, and the issue of dominance is resolved by counting local excitations. Cf. Theorem 9 in Section \ref{SubSec3.5}. As an example, we analyze the case $D^2=147$ and find that on $\bbH_2$ there are 98 dominant vertical PGSs and 196 non-dominant inclined PGSs. Cf. Figure 5 and Theorem 10. Consequently, the number of $D$-EGMs for a large fugacity $u$ also equals $98$, and these EGMs inherit symmetries between the vertical PGSs. \FigureE5 A new situation arises for $D$ from Class HC. Here the PGSs stem from $D^*$-sub-lattices where $D^*>D$ is the nearest value for which $(D^*)^2$ is a L\"oschian number divisible by $3$ (i.e., from Classes HA or HB). The minimal value for the difference $(D^*)^2-D^2$ equals 2 and is achieved when $D^2=3b^2+3b+1$ and $(D^*)^2=3b^2+3b+3$, for integer $b\geq 2$. (Here, for $b=1$ we obtain $D^2=3b^2+3b+1=7$ which yields a value with sliding.) If $D^*$ has type HA1, the number of $D$-EGMs in $\bbH_2$ equals $2(D^*)^2/3$ while if $D^*$ has type HA2, the number of $D$-EGMs equals $4(D^*)^2/3$. Moreover, the PGSs are $\alpha$-configurations and are obtained from each other by $\bbH_2$-shifts for $D^*$ from Class HA1 and by $\bbH_2$-shifts or reflections for $D^*$ from Class HA2. Cf. Figure 6. \FigureF6 For instance, if $D^2=19$ then $(D^*)^2=21$, and the number of the EGMs for $D^2=19$ equals $28$. On the other hand, for $D^2=43$ the value $(D^*)^2$ is $48$. Therefore, for $D^2=43$ the number of EGMs equals $32$. Cf. Theorem 11 in Section \ref{SubSec3.5}. If $D^*$ is a value of type HB then again the dominance analysis is needed to determine which PGSs generate EGMs. Finally, consider $D^2$ from Classes HD or HE. From now on we refer to the values of $D^2$ from these classes as {\it exceptional}.
(In fact, these values are exceptions from Class HC.) It is convenient to divide Class HD into two sub-classes: HD1: $D^2=$ 1, 13, 28, 49, 64, 97, 157; HD2: $D^2=$ 16, 256. For $D=D^2=1$ we have a single PGS where all sites in $\bbH_2$ are occupied. There is just one EGM for all values of $u$ (not only for $u$ large), which is a Bernoulli random field over $\bbH_2$, with probability for a site being empty/vacant $1/(1+u)$ and occupied $u/(1+u)$. We will treat the case $D=1$ as trivial and omit it from the forthcoming discussions. \FigureG7 Take $D^2=13, 28, 49, 64, 97, 157$ (sub-class HD1) and write $D^2=a^2+b^2+ab$ where $a, b$ are non-negative integers. Then we have a particular structure of a PGS related to a quadrilateral $OABC$ in $\bbH_2$ where (i) two adjacent sides $OA$ and $OC$ have length $D$ and form an angle $2\pi/3$, (ii) two other sides $AB$ and $BC$ (also adjacent to each other) have $|AB|^2=D^2+2a+b+1$ and $|BC|^2=D^2-a+b+1$, and (iii) the shorter diagonal $OB$ has $|OB|^2=D^2 + a+2b+1$. We place particles at the vertices of such a quadrilateral and then extend this pattern to the whole of $\bbH_2$, generating a picture with alternating stripes parallel to $OB$. Such configurations are called $\beta$-configurations or configurations of type $\beta$ (in short: $\beta$-ACs); these configurations are not constructed from sub-lattices. These configurations yield PGSs in Class HD1; accordingly, we refer to them as $\beta$-PGSs. Cf. Figure 7 where $D^2=13$. There are $66$ PGSs for $D^2=13$, $132$ for $D^2=28$, $222$ for $D^2=49$, $288$ for $D^2=64$, $426$ for $D^2=97$, and $678$ for $D^2=157$. For a large fugacity $u$, each PGS gives rise to a different $D$-EGM, and the number of the $D$-EGMs matches that of the $D$-PGSs. See Theorem 12(i) from Section \ref{SubSec3.6}. \FigureH8 Another particular PGS-structure arises for $D^2=$ 16, 256 (sub-class HD2). For these integers $D^2=a^2+b^2+ab$ where $a=0$, $b=D$.
Then we take the value ${\overline} \def\ovp{{\overline p} D}=2D+1$ which is divisible by $3$. Hence, an equilateral triangle, $DEO$, can be inscribed in $\bbH_2$, with side-length $\overline} \def\ovp{{\overline p} D$ and a horizontal base $OE$. Now, consider an inner equilateral triangle $ABC$ with side-length $\left(D^2+D+1\right)^{1/2}$ inscribed in $DEO$: the vertices $A,B,C$ lie on the sides $OD$, $DE$ and $EO$ and divide them in the ratio $D:(D+1)$. Let us put particles at the vertices $A,B,C$ and $D,E,O$. The PGSs for $D^2=$16, 256 arise from triangles congruent to $DEO$, each carrying the above particles, via the extension to the whole of $\bbH_2$. For this type of configuration we use the terms $\gamma$-configurations and $\gamma$-PGSs. Cf. Figure 8. There are $54$ $D$-PGSs for $D^2=16$ and 726 for $D^2=256$. As above, each PGS gives rise to a different EGM, and the number of the EGMs matches that of the PGSs. See Theorem 12(ii) from Section \ref{SubSec3.6}. \FigureI9 For $D^2=$ 67 (Class HE) we have a competition between two types of PGSs: (a) 50 PGSs as in Class HA ($\alpha$-configurations) with the squared exclusion diameter $75$ and (b) 300 PGSs as in Class HD1 ($\beta$-configurations). This is formally stated in Theorem 13 in Section \ref{SubSec3.7}. See Figure 9. We conjecture that the $\alpha$-PGSs of type (a) are dominant. We see that, for values $D$ from exceptional classes HD and HE on $\bbH_2$, we have PGSs that are not generated from sub-lattices (apart from $D=1$), yet these cases do not lead to sliding. In contrast, on $\bbZ^2$, if for a given attainable $D$ there exists a non-lattice PGS then this value of $D$ exhibits sliding.
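The Class-HC rule and the two worked examples above ($D^2=19$ and $D^2=43$) can be checked by a small search. The sketch below is our own illustration; it assumes $D^*$ lands in Class HA and deliberately ignores the exceptional values from Classes HD, HE, HS:

```python
# Our sketch of the Class-HC rule: for non-exceptional D^2 not divisible by 3,
# the PGSs come from the nearest attainable (D*)^2 > D^2 divisible by 3; the
# EGM count is 2(D*)^2/3 (D* of type HA1) or 4(D*)^2/3 (D* of type HA2).
from math import isqrt

def decompositions(n):
    # Löschian decompositions n = a^2 + b^2 + a*b with 0 <= a <= b
    out, a = [], 0
    while 3 * a * a <= n:
        disc = 4 * n - 3 * a * a
        s = isqrt(disc)
        if s * s == disc and (s - a) % 2 == 0 and (s - a) // 2 >= a:
            out.append((a, (s - a) // 2))
        a += 1
    return out

def d_star_sq(d2):
    # smallest attainable value above d2 that is divisible by 3
    n = d2 + 1
    while n % 3 != 0 or not decompositions(n):
        n += 1
    return n

def egm_count(d2):
    ds2 = d_star_sq(d2)
    dec = decompositions(ds2)
    a, b = dec[0]
    ha1 = len(dec) == 1 and (a == 0 or a == b)   # HA1 vs HA2 type of D*
    return (2 if ha1 else 4) * ds2 // 3

print(d_star_sq(19), egm_count(19))   # 21 28  (as in the text)
print(d_star_sq(43), egm_count(43))   # 48 32
```

Here $21$ is of type HA2 (unique decomposition $(1,4)$), giving $4\cdot 21/3=28$, while $48$ is of type HA1 (decomposition $(4,4)$), giving $2\cdot 48/3=32$.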
\section{Formal preliminaries and basic facts}\label{Sec2} \subsection{The H-C model on ${\mathbb A}} \def\bbD{{\mathbb D}_2$}\label{SubSec2.1} We define the two-dimensional unit triangular lattice ${\mathbb A}} \def\bbD{{\mathbb D}_2$ as the set of points ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} =(x_1;x_2)\in{\mathbb R}} \def\bbS{{\mathbb S}} \def\bbT{{\mathbb T}^2$ (sites of the lattice) with Euclidean co-ordinates \begin{equation}\label{triangularL} x_1=m-\displaystyle\frac{1}{2}n\;\hbox{ and }\;x_2=\displaystyle\frac{\sqrt 3}{2}n,\;\hbox{ where }\;m,n\in\bbZ.\end{equation} Every site ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}\in{\mathbb A}} \def\bbD{{\mathbb D}_2$ has six neighboring sites ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}'$ such that the distance $\rho({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}')$ equals $1$. In what follows we write ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}\in{\mathbb A}} \def\bbD{{\mathbb D}_2$ for brevity. As an alternative to ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} =(x_1;x_2)$, we also write ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} \simeq (m,n)\in{\mathbb A}} \def\bbD{{\mathbb D}_2$. Geometrically, points $(1;0)\simeq (1,0)$ and $(-1/2;{\sqrt 3}/2)\simeq (0,1)$ provide a natural basis for ${\mathbb A}} \def\bbD{{\mathbb D}_2$. Here and below, $\rho (=\rho_2)$ stands for the 2D Euclidean metric: for ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} =(x_1;x_2),\by =(y_1;y_2)\in{\mathbb R}} \def\bbS{{\mathbb S}} \def\bbT{{\mathbb T}^2$, the distance $\rho ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} ,\by)=\left[\rho_1(x_1,y_1)^2 +\rho_1(x_2,y_2)^2\right]^{1/2}$, where $\rho_1(x,y)=|y-x|$, $x,y\in{\mathbb R}} \def\bbS{{\mathbb S}} \def\bbT{{\mathbb T}$.
Alternatively, for ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} \simeq (m,n),\by\simeq (u,v)\in{\mathbb A}} \def\bbD{{\mathbb D}_2$, $$\rho ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} ,\by)^2=(m-u)^2+(n-v)^2-(m-u)(n-v).$$ Given $\bu\in{\mathbb A}} \def\bbD{{\mathbb D}_2$, we designate ${\tt T}_\bu:{\mathbb A}} \def\bbD{{\mathbb D}_2\to{\mathbb A}} \def\bbD{{\mathbb D}_2$ to be an ${\mathbb A}} \def\bbD{{\mathbb D}_2$-shift by $\bu$, with ${\tt T}_\bu{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}={\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}+\bu$, ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}\in{\mathbb A}} \def\bbD{{\mathbb D}_2$. Similarly, \ $\tR:{\mathbb A}} \def\bbD{{\mathbb D}_2\to{\mathbb A}} \def\bbD{{\mathbb D}_2$ stands for the reflection map about the horizontal axis. Given a number $D\geq 1$, consider $D$-{\it admissible} configurations ($D$-ACs, or, in short, ACs): $$\phi_{{\mathbb A}} \def\bbD{{\mathbb D}_2}:{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}\in{\mathbb A}} \def\bbD{{\mathbb D}_2\mapsto\phi_{{\mathbb A}} \def\bbD{{\mathbb D}_2}({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})\in\{0,1\}\;\hbox{ (or shortly $\phi:{\mathbb A}} \def\bbD{{\mathbb D}_2\to\{0,1\}^{{\mathbb A}} \def\bbD{{\mathbb D}_2}$)}$$ such that for any two {\it occupied} sites ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$ and $\by$ with $\phi ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})=\phi (\by)=1$ the distance $\rho ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} ,\by)\geq D$. (We can think that $\phi ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})=1$ means site ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$ is occupied in $\phi$ by a particle, and $\phi ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})=0$ that ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$ is vacant in $\phi$. 
Particles are treated as non-overlapping open disks of diameter $D$ with the centers placed at lattice sites.) We write $$\hbox{${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}\in\phi$ if $\phi ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})=1$ and identify $\phi$ with the subset in ${\mathbb A}} \def\bbD{{\mathbb D}_2$ where $\phi ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})=1$.}$$ The value $D$ is called the H-C exclusion diameter. The set of admissible configurations is denoted by $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C=\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C(D,{\mathbb A}} \def\bbD{{\mathbb D}_2)$. As was said in the Introduction, we can assume that $D^2$ is a L\"oschian number: \begin{equation}\label{Loesch} \hbox{$D^2\in\bbN$ \ and \ $D^2= a^2 + b^2 + ab$ \ where \ $a,b \in {\bbZ}$;} \end{equation} it means that $D$ is attainable, i.e., there are sites ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\by\in{\mathbb A}} \def\bbD{{\mathbb D}_2$ with $\rho ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\by )=D$. Assumption \eqref{Loesch} does not restrict generality, as any other $D' >1$ can be replaced by the smallest $D \ge D'$ satisfying \eqref{Loesch} without changing the set $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C$. Set $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C$ is a closed subset in the Cartesian product $\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z:=\{0,1\}^{{\mathbb A}} \def\bbD{{\mathbb D}_2}$ (the set of all $0,1$-configurations) in the Tykhonov topology. For $D=1$, $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C =\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z$.
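The coordinate conventions \eqref{triangularL}, the squared-distance formula in $(m,n)$-coordinates, and the $D$-admissibility condition can all be sanity-checked numerically; a small sketch (ours):

```python
# Numerical sanity check (ours): for sites x ≃ (m, n) of A_2 with
# x1 = m - n/2, x2 = (sqrt(3)/2) n, the squared Euclidean distance equals
# (m-u)^2 + (n-v)^2 - (m-u)(n-v), always a non-negative integer.
from itertools import product
from math import sqrt, isclose

def euclid_sq(p, q):
    x1, y1 = p[0] - p[1] / 2, sqrt(3) / 2 * p[1]
    x2, y2 = q[0] - q[1] / 2, sqrt(3) / 2 * q[1]
    return (x1 - x2) ** 2 + (y1 - y2) ** 2

def lattice_sq(p, q):
    dm, dn = p[0] - q[0], p[1] - q[1]
    return dm * dm + dn * dn - dm * dn

pts = list(product(range(-3, 4), repeat=2))
assert all(isclose(euclid_sq(p, q), lattice_sq(p, q)) for p in pts for q in pts)

def admissible(config, d_sq):
    # D-admissibility: every pair of occupied sites at squared distance >= D^2
    return all(lattice_sq(p, q) >= d_sq
               for i, p in enumerate(config) for q in config[i + 1:])

print(admissible([(0, 0), (1, 3)], 7))   # True: rho^2 = 1 + 9 - 3 = 7
```

The check confirms, in particular, that squared distances on ${\mathbb A}_2$ are exactly the values of the quadratic form $a^2+b^2+ab$.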
The notion of an AC can be defined for any ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\subset{\mathbb A}} \def\bbD{{\mathbb D}_2$; accordingly, one can use the notation $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})=\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C (D,{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})$. The restriction of a configuration $\phi\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C$ to ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ is denoted by $\phi\upharpoonright_{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$. We are interested in some particular probability measures ${\mbox{\boldmath${\mu}$}}$ on $(\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z,\fB(\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z))$ sitting on $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C$ (i.e., such that ${\mbox{\boldmath${\mu}$}} (\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C )={\mbox{\boldmath${\mu}$}} (\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z)=1$) where $\fB(\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z)$ is the Borel $\sigma$-algebra in $\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z$. As we repeatedly stressed, the measures of interest are extreme Gibbs/DLR probability measures for high densities/large fugacities, which are formally defined below. Let ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\subset{\mathbb A}} \def\bbD{{\mathbb D}_2$ be a finite set and $\phi\in{\mathcal A}$.
We say that a finite configuration $\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})$ is $(\phi ,{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})$-compatible if the concatenated configuration $\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\vee (\phi \upharpoonright_{{\mathbb A}} \def\bbD{{\mathbb D}_2\setminus{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}})\in{\mathcal A}$. The set of $(\phi ,{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})$-compatible configurations is denoted by $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\phi )$. Given $u>0$, consider a probability measure $\mu_{{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}(\;\cdot\; \|\phi )$ on $\{0,1\}^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ given by \begin{equation}\label{GibbsV} \mu_{{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}(\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\,\|\phi )=\begin{cases}\displaystyle\frac{u^{\sharp (\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})-\sharp (\phi^{\,{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}})}}{{\mathbf Z}({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\phi )},&\hbox{if $\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\phi )$,}\\ 0,&\hbox{if $\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\in\{0,1\}^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\setminus\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\phi )$.}\end{cases}\end{equation} Here $\sharp (\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})$ and $\sharp (\phi^{\mathbb V}} 
\def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})$ stand for the number of particles in $\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ and $\phi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$: $$\sharp (\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}):=\#\big\{x\in{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}:\;\psi (x)=1\big\},\;\;\sharp (\phi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}):=\#\big\{x\in{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}:\;\phi (x)=1\big\}.$$ Next, ${\mathbf Z}({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\;\|\phi )$ is the {\it partition function} in ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ with the boundary condition $\phi$\,: \begin{equation}\label{PartFnctnV}{\mathbf Z}({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\phi )=\sum\limits_{\psi^{\,{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\phi )}u^{\sharp (\psi^{\,{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}})-\sharp (\phi^{\,{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}})}.\end{equation} Measure $\mu_{{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}(\;\cdot\; \|\phi )$ sits on $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\phi )$. Parameter $u>0$ is called {\it fugacity} or {\it activity} (of an occupied site). 
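For very small volumes, the finite-volume Gibbs measure \eqref{GibbsV} can be tabulated by brute force. The sketch below is an illustration of ours under one explicit convention: integer coordinates $(m,n)$ in which the squared Euclidean distance between sites is $\Delta m^2+\Delta m\,\Delta n+\Delta n^2$ (consistent with \eqref{Loesch}); the function names are assumptions of the sketch, the boundary configuration is assumed admissible, and the constant factor $u^{-\sharp(\phi^{\,\mathbb V})}$ is dropped since it cancels after normalization.

```python
# Minimal sketch (not the paper's code): the finite-volume Gibbs
# distribution mu_V( . || phi) of Eqn (GibbsV) by brute force, for a
# triangular lattice in integer coordinates (m, n) where the squared
# Euclidean distance between sites is dm*dm + dm*dn + dn*dn
# (an assumption of this sketch, consistent with Eqn (Loesch)).
from itertools import combinations

def q2(x, y):
    """Squared distance between lattice sites x = (m, n), y = (m', n')."""
    dm, dn = x[0] - y[0], x[1] - y[1]
    return dm * dm + dm * dn + dn * dn

def admissible(occupied, d2):
    """Hard-core condition: all pairwise squared distances >= D^2."""
    return all(q2(x, y) >= d2 for x, y in combinations(occupied, 2))

def gibbs_distribution(volume, boundary, d2, u):
    """Probabilities of the (phi, V)-compatible configurations psi^V."""
    volume = sorted(volume)
    weights = {}
    for k in range(len(volume) + 1):
        for psi in combinations(volume, k):
            if admissible(list(psi) + list(boundary), d2):
                # u^{sharp(psi^V)}; the factor u^{-sharp(phi^V)} of
                # Eqn (GibbsV) is a constant and cancels below
                weights[psi] = u ** len(psi)
    z = sum(weights.values())          # partition function Z(V || phi)
    return {psi: w / z for psi, w in weights.items()}
```

For the four-site volume $\{(0,0),(1,0),(0,1),(1,1)\}$, empty boundary and $D^2=3$, the admissible configurations are the empty one, the four singletons and the pair $\{(0,0),(1,1)\}$, so ${\mathbf Z}=1+4u+u^2$.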
A probability measure ${\mbox{\boldmath${\mu}$}}$ on $(\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z,\fB(\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z))$ is called a $D$-H-C {\it Gibbs/DLR measure} (in short, $D$-H-C GM or GM when the reference to $D$ can be omitted)\ if (i) ${\mbox{\boldmath${\mu}$}} ({\mathcal A})=1$, (ii) $\forall$ \ finite ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\subset{\mathbb A}} \def\bbD{{\mathbb D}_2$ and a function $f:\phi\in \mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z\mapsto f(\phi )\in{\mathbb C}$ depending only on the restriction $\phi\upharpoonright_{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$, the integral ${\mbox{\boldmath${\mu}$}} (f)=\int_\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z f(\phi ){\rm d}} \def\re{{\rm e}} \def\rn{{\rm n}{\mbox{\boldmath${\mu}$}} (\phi )$ has the form \begin{equation}\label{GibbsInt}\begin{array}{c}{\mbox{\boldmath${\mu}$}} (f)=\displaystyle\int_{\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z}\int_{\{0,1\}^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}} f(\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\vee\phi\upharpoonright_{{\mathbb A}} \def\bbD{{\mathbb D}_2\setminus{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}){\rm d}} \def\re{{\rm e}} \def\rn{{\rm n}\mu_{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}(\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\, \|\phi ){\rm d}} \def\re{{\rm e}} \def\rn{{\rm n}{\mbox{\boldmath${\mu}$}} (\phi ).\end{array}\end{equation} One can say that under such measure ${\mbox{\boldmath${\mu}$}}$, the probability of a configuration $\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ in a finite volume ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\subset{\mathbb A}} \def\bbD{{\mathbb D}_2$, conditional on a configuration $\phi\upharpoonright_{{\mathbb A}} \def\bbD{{\mathbb D}_2\setminus{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}$, 
coincides with $\mu_{{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}(\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\,\|\phi)$, for ${\mbox{\boldmath${\mu}$}}$-a.a. $\phi\in\{0,1\}^{{\mathbb A}} \def\bbD{{\mathbb D}_2}$. In the literature, equality \eqref{GibbsInt} is often referred to as the DLR equation for a measure ${\mbox{\boldmath${\mu}$}}$ (in fact, it represents a system of equations labeled by ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ and $f$). For the general theory of Gibbs measures, see the monograph \cite{Ge}, Chapters 3, 4, 5--8. The $D$-H-C GMs form a {\it Choquet simplex} (in the weak-convergence topology on the set of probability measures on $(\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z,\fB(\mathscr X} \def\sY{\mathscr Y} \def\sZ{\mathscr Z))$), which we denote by $\sG=\sG(D,u,{\mathbb A}} \def\bbD{{\mathbb D}_2)$. An {\it extreme} $D$-H-C GM ${\mbox{\boldmath${\mu}$}}$ is a $D$-H-C GM which does not admit a non-trivial decomposition ${\mbox{\boldmath${\mu}$}} =a{\mbox{\boldmath${\mu}$}}^{(1)} +(1-a ){\mbox{\boldmath${\mu}$}}^{(2)}$ in terms of other $D$-H-C GMs ${\mbox{\boldmath${\mu}$}}^{(i)}$, $i=1,2$, with $a\in (0,1)$. As was said, the extreme $D$-H-C GMs ($D$-EGMs or briefly EGMs) represent pure phases. The collection of $D$-EGMs is denoted by $\sE (D)=\sE(D,u,{\mathbb A}} \def\bbD{{\mathbb D}_2)$. (Argument $u$ will be systematically omitted.) Any $D$-H-C Gibbs measure ${\mbox{\boldmath${\mu}$}}$ is a barycenter/mixture for some unit mass distribution over $\sE (D)$. 
\begin{remark} {\rm The simplest version of the partition function is ${\mathbf Z} ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\varnothing )$, with an empty boundary condition: \begin{equation}\label{(1)} {\mathbf Z} ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\varnothing )=\sum\limits_{\psi^{\,{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z} )}u^{\sharp (\psi^{\,{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}})}.\end{equation} Despite the straightforward (and appealing) form of ${\mathbf Z} ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\varnothing )$, it is not always convenient (or at least not the most convenient) for the rigorous analysis in the {\it thermodynamic limit}, for a sequence of volumes ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}_k\nearrow{\mathbb A}} \def\bbD{{\mathbb D}_2$ in the Van Hove sense. The corresponding limit Gibbs measure (if it exists) depends on the particular shape of the volumes ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}_k$, which can be in `good' or `bad' agreement with {\it symmetries} of the hard-core model on ${\mathbb A}} \def\bbD{{\mathbb D}_2$. In this paper we concentrate on the partition function ${\mathbf Z} ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|{\varphi} )$ with a PGS boundary condition ${\varphi}$. We also analyze a periodic version of \eqref{(1)}.} $\blacktriangle$ \end{remark} A {\it ground state} ${\varphi}\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C (D)$ in the H-C model with $u>1$ is defined by the property that one cannot remove finitely many particles from ${\varphi}$ and replace them by a larger number of particles without breaking $D$-admissibility. 
In other words, one cannot find a finite subset ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\subset{\mathbb A}} \def\bbD{{\mathbb D}_2$ and a configuration $\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|{\varphi} )$ such that $\sharp\psi^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}>\sharp{\varphi}\upharpoonright_{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$. A crucial role belongs to {\it periodic ground states} (PGSs). A $D$-AC $\phi\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C$ is said to be periodic if there exist two linearly independent vectors ${\mathbf e}} \def\bh{{\mathbf h}^{(1)},{\mathbf e}} \def\bh{{\mathbf h}^{(2)}\in{\mathbb A}} \def\bbD{{\mathbb D}_2$ such that $\phi ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})=\phi ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}+{\mathbf e}} \def\bh{{\mathbf h}^{(i)})$ $\forall$ ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}\in{\mathbb A}} \def\bbD{{\mathbb D}_2$, $i=1,2$. In terms of ${\mathbb A}} \def\bbD{{\mathbb D}_2$-shifts ${\tt T}_\bu$ it means that ${\tt T}_{{\mathbf e}} \def\bh{{\mathbf h}^{(i)}}\phi =\phi$ for $i=1,2$. The collection of PGSs for a given $D$ is denoted by $\mathscr P} \def\sR{\mathscr R(=\mathscr P} \def\sR{\mathscr R (D)=\mathscr P} \def\sR{\mathscr R (D,{\mathbb A}} \def\bbD{{\mathbb D}_2))$. The PGSs on ${\mathbb A}} \def\bbD{{\mathbb D}_2$ are relatively straightforward and obtained from $D$-sub-lattices. Now we turn to arithmetic properties of a given $D$. 
Any ordered pair of integers $(a, b)$ which is a solution to equation \eqref{Loesch} defines a $D$-sub-lattice of ${\mathbb A}} \def\bbD{{\mathbb D}_2$ containing the origin and the following 6 sites: \begin{equation}\label{(10)} (-a,b),\; (b, a+b),\; (a+b, a),\; (a,-b),\; (-b, a+b),\; (a+b, -a),\end{equation} all of which are solutions to \eqref{Loesch} as ordered pairs of integers. If $ab=0$ or $a = b$ then the pair $(a, b)$ defines a single $D$-sub-lattice of ${\mathbb A}} \def\bbD{{\mathbb D}_2$ which is mapped into itself under the reflection $\tR$ (Class TA1). If $ab\neq 0$ and $a \neq b$ then the pair $(b, a)$ also defines a $D$-sub-lattice of ${\mathbb A}} \def\bbD{{\mathbb D}_2$ which is a reflection by $\tR$ of the sub-lattice defined by $(a, b)$ (Classes TA2 and TB). For each $D$-sub-lattice of ${\mathbb A}} \def\bbD{{\mathbb D}_2$ generated by a solution to \eqref{Loesch} there are exactly $D^2$ distinct ${\mathbb A}} \def\bbD{{\mathbb D}_2$-shifts ${\tt T}_\bu$, since there are exactly $D^2$ lattice sites inside the fundamental parallelogram of the $D$-sub-lattice. All shifted configurations are PGSs. Moreover, all PGSs corresponding to a given $D$ are obtained as ${\mathbb A}} \def\bbD{{\mathbb D}_2$-shifts of $D$-sub-lattices generated by the solutions to \eqref{Loesch}. 
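The count of $D^2$ distinct shifts can be checked numerically. The sketch below (our own illustration, with hypothetical function names) takes $(a,b)$ together with $(-b,a+b)$ from the list \eqref{(10)} as a generating pair, again in integer coordinates where the squared distance is $m^2+mn+n^2$, and counts the lattice sites in the half-open fundamental parallelogram; the count equals $\det\begin{pmatrix}a&-b\\ b&a+b\end{pmatrix}=a^2+ab+b^2=D^2$.

```python
# Hedged sketch: count lattice sites in the fundamental parallelogram
# of the sub-lattice generated by (a, b) and (-b, a+b) (the latter is
# one of the six sites in Eqn (10)).  The count equals
# det[[a, -b], [b, a+b]] = a^2 + ab + b^2 = D^2.
def shifts_count(a: int, b: int) -> int:
    d2 = a * a + a * b + b * b           # D^2 for the pair (a, b)
    count = 0
    bound = 2 * (abs(a) + abs(b) + 2)    # crude box around the parallelogram
    for m in range(-bound, bound + 1):
        for n in range(-bound, bound + 1):
            # Solve (m, n) = s*(a, b) + t*(-b, a+b); by Cramer's rule,
            # s = ((a+b)*m + b*n)/d2 and t = (-b*m + a*n)/d2.
            s_num = (a + b) * m + b * n
            t_num = -b * m + a * n
            if 0 <= s_num < d2 and 0 <= t_num < d2:   # i.e. 0 <= s, t < 1
                count += 1
    return count
```

For $(a,b)=(1,2)$ the function returns $7=D^2$, in agreement with the seven shifted copies of the $\sqrt 7$-sub-lattice.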
\subsection{The H-C model on $\bbH_2$}\label{SubSec2.2} Formally, $\bbH_2$ can be defined as the set-theoretic difference obtained by removing from the lattice ${\mathbb A}} \def\bbD{{\mathbb D}_2$ the sub-lattice ${\mathbb A}} \def\bbD{{\mathbb D}_2({\sqrt 3})$ with a fundamental parallelogram $\{(0,0),(1,2),(2,1),(1,-1)\}$: \begin{equation}\label{bbH2a}\begin{array}{l}} \def\beac{\begin{array}{c}} \def\bear{\begin{array}{r} \bbH_2={\mathbb A}} \def\bbD{{\mathbb D}_2\setminus{\mathbb A}} \def\bbD{{\mathbb D}_2({\sqrt 3}),\\ \quad \hbox{ where }\;\displaystyle{\mathbb A}} \def\bbD{{\mathbb D}_2({\sqrt 3})=\Big\{m\cdot (1,2) +n\cdot (2,1):\;m,n\in\bbZ \Big\}.\end{array}\end{equation} Equivalently, \begin{equation}\label{bbH2b}\bbH_2={\tt T}_{(1,0)}{\mathbb A}} \def\bbD{{\mathbb D}_2({\sqrt 3})\cup{\tt T}_{(0,1)}{\mathbb A}} \def\bbD{{\mathbb D}_2({\sqrt 3}).\end{equation} Each site in $\bbH_2$ has three neighboring sites, at the Euclidean distance $1$. Lattice ${\mathbb A}} \def\bbD{{\mathbb D}_2$ is represented as the union of three disjoint congruent subsets: ${\mathbb A}} \def\bbD{{\mathbb D}_2={\mathbb A}} \def\bbD{{\mathbb D}_2({\sqrt 3})\cup {\tt T}_{(1,0)}{\mathbb A}} \def\bbD{{\mathbb D}_2({\sqrt 3})\cup{\tt T}_{(0,1)}{\mathbb A}} \def\bbD{{\mathbb D}_2({\sqrt 3})$. If $D^2\equiv 0 \mod 3$ then all 3 vertices of an equilateral ${\mathbb A}} \def\bbD{{\mathbb D}_2$-triangle $\triangle$ with side-length $D$ lie in the same subset. Otherwise $D^2\equiv 1 \mod 3$, and all vertices of $\triangle$ lie in different subsets. Hence, for every L\"oschian number $D^2$ there are pairs of vertices ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}'\in\bbH_2$ for which $\rho ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}')=D$. 
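The dichotomy $D^2\equiv 0$ or $1 \pmod 3$ used above reflects a simple arithmetic fact about the quadratic form in \eqref{Loesch}: the residue $2$ never occurs. Since $a^2+ab+b^2 \bmod 3$ depends only on $a,b \bmod 3$, a one-line check suffices (illustrative only):

```python
# Quick check: a Loeschian number D^2 = a^2 + ab + b^2 is always
# congruent to 0 or 1 mod 3 -- the dichotomy used in the text.
# It is enough to scan residues of (a, b) mod 3.
residues = {(a * a + a * b + b * b) % 3 for a in range(3) for b in range(3)}
assert residues == {0, 1}   # residue 2 never occurs
```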
Lattice ${\mathbb A}} \def\bbD{{\mathbb D}_2$ is represented as a non-disjoint union \begin{equation}\label{bbA2union}{\mathbb A}} \def\bbD{{\mathbb D}_2=\bbH_2\cup\left({\tt T}_{(-1;0)}\bbH_2\right) =\bbH_2\cup\left({\tt T}_{(1;0)}\bbH_2\right).\end{equation} As above, we use the notation ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}=(x;x')\in\bbH_2$ and ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} \simeq (m,n)\in\bbH_2$. We use the term $\bbH_2$-shift for any ${\mathbb A}} \def\bbD{{\mathbb D}_2$-shift ${\tt T}_\bu$ where $\bu \simeq (m,n)$ has both $m,n$ divisible by 3. Also, $\tR$ stands for the reflection about the horizontal axis: $\tR{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} = (n-m,-m)$ for ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} =(m,n)\in\bbH_2$. The definitions of admissible configurations, compatibility, partition functions, Gibbs measures and extreme Gibbs measures on $\bbH_2$ are similar to those on ${\mathbb A}} \def\bbD{{\mathbb D}_2$, and we do not repeat them. We also continue using similar notation $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C=\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C (D, \bbH_2)$, $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})=\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C (D, {\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})$, $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|\phi)$. The definitions of a ground state and of a periodic ground state on $\bbH_2$ are direct repetitions of their counterparts on ${\mathbb A}} \def\bbD{{\mathbb D}_2$. As in the case of ${\mathbb A}} \def\bbD{{\mathbb D}_2$, the crucial notion is a periodic ground state (PGS). However, on $\bbH_2$ a PGS is not necessarily obtained from a sub-lattice (although it is the case for Classes HA, HB and HC). 
The set of PGSs for a given value $D$ is again denoted by $\mathscr P} \def\sR{\mathscr R (D)=\mathscr P} \def\sR{\mathscr R (D,\bbH_2)$ and that of EGMs by $\sE (D)=\sE (D,u,\bbH_2)$, respectively. (As above, argument $u$ will be systematically omitted.) \section{Main theorems}\label{Sec3} \subsection{Templates. Contour definitions}\label{SubSec3.1} First, let us consider the case of ${\mathbb A}} \def\bbD{{\mathbb D}_2$. For a given $D$, {\it templates} $F_{k,l}=F^{{\mathbb A}} \def\bbD{{\mathbb D}_2}_{k,l}$ are defined by \begin{equation}\label{3.1}F_{k,l} := \{ (m, n) \in {\mathbb A}} \def\bbD{{\mathbb D}_2:\; kD^2 \le m < (k+1)D^2,\; lD^2 \le n < (l+1)D^2\},\; k,l \in {\bbZ}. \end{equation} Each template contains $D^4$ points. Note that sites $(kD^2,lD^2)$ form a sub-lattice ${\mathbb A}} \def\bbD{{\mathbb D}_2(D^2)$, and all $D$-PGSs are periodic relative to it. The family $\{F_{k,l}\}$ forms a partition of ${\mathbb A}} \def\bbD{{\mathbb D}_2$. The template $F_{0,0}$, treated as a $D^2 \times D^2$-torus, is partitioned into $D^2$ rhombuses, one partition for each PGS-equivalence class. We frequently omit the indices $k, l$ in the notation for a template when their values are not important or are evident from the context. Figure 10 shows examples of templates. \FigureJ10 In what follows, we suppose that volume ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\subset{\mathbb A}} \def\bbD{{\mathbb D}_2$ is a finite union of templates; such a set ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ is called a {\it basic lattice polygon} (briefly, a basic polygon). 
Given a PGS ${\varphi}\in\mathscr P} \def\sR{\mathscr R$ and a basic polygon ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\subset{\mathbb A}} \def\bbD{{\mathbb D}_2$, the partition function ${\mathbf Z} ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|{\varphi})$ in \eqref{PartFnctnV} gives rise to a Gibbs probability distribution $\mu_{{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}(\;\cdot\; \|{\varphi} )$ on $\{0,1\}^{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ concentrated on $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|{\varphi})$. We say that a PGS ${\varphi}\in\mathscr P} \def\sR{\mathscr R$ {\it generates} a GM ${\mbox{\boldmath${\mu}$}}_{\varphi}$ if, \ $\forall$ \ sequence of basic polygons ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}_k\nearrow{\mathbb A}} \def\bbD{{\mathbb D}_2$ satisfying the Van Hove condition, \begin{equation}\label{gener}{\mbox{\boldmath${\mu}$}}_{\varphi} =\lim\limits_{k\to\infty}\mu_{{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}_k}(\;\cdot\; \|{\varphi} ).\end{equation} Equivalently, we say that ${\mbox{\boldmath${\mu}$}}_{\varphi}$ is {\it generated} by ${\varphi}$. A specific construction of a GM exploits periodic boundary conditions, in toric volumes ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}_k=\bbT (k)$, $k = 1, 2,\ldots$ Here $\bbT (k)=\bbT^{{\mathbb A}} \def\bbD{{\mathbb D}_2} (k)$ is given by \begin{equation}\label{(9.2)}\begin{array}{l}} \def\beac{\begin{array}{c}} \def\bear{\begin{array}{r}\bbT (k) = \{ (m, n) \in {\mathbb A}} \def\bbD{{\mathbb D}_2:\; -kD^2 \le m < kD^2,\; -kD^2 \le n < kD^2;\;\hbox{ with}\\ \qquad\quad\hbox{identification }\; (kD^2, n) \equiv (-kD^2 , n)\;\hbox{ and }\;(m,kD^2)\equiv(m,-kD^2)\}. 
\end{array}\end{equation} To determine the admissible configurations in a torus we use the condition that $\rho^{(k)}({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\by)\geq D$. Here the metric $\rho^{(k)}$ is the toric metric on $\bbT (k)$ defined by $$\rho^{(k)}({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\by)=\left[\rho^{(k)}_1(x_1,y_1)^2 + \rho^{(k)}_1(x_2,y_2)^2\right]^{1/2},$$ where ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}=(x_1,x_2)$, $\by=(y_1,y_2)$. In turn, $\rho^{(k)}_1$ is a metric on the interval $[-kD^2,kD^2)$, with $\rho^{(k)}_1(x,y)=\min\,\{y-x,x+2kD^2-y\}$ for $-kD^2 \leq x\le y<kD^2$. The set of admissible configurations in $\bbT (k)$ is denoted by $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C_{\rm{per},k}=\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C_{\rm{per},k}(\bbT (k))$. In the same spirit as \eqref{(1)}, the partition function in $\bbT (k)$ with periodic boundary condition is determined by \begin{equation}\label{(101)}{\mathbf Z}_{\rm{per}}(\bbT (k))=\sum\limits_{\phi_{\bbT (k)}\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C_{\rm{per},k}}\;\prod_{{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}\in\bbT_k} u^{\phi_{\bbT(k)}({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} )}.\end{equation} This in turn defines the Gibbs distribution $\mu_{{\rm{per}},k}$. Next, we set \begin{equation}\label{(110)}{\mbox{\boldmath${\mu}$}}_{\rm{per}}=\lim\limits_{k\to\infty}\mu_{{\rm{per}},k}\end{equation} provided that the limit measure exists. \FigureK11 The concept of a template can be extended without changes to the case of $\bbH_2$ when $D^2$ is from Classes HA, HB or HC (where the PGSs are $\alpha$-configurations). Here templates $F_{k,l}=F^{\bbH_2}_{k,l}$ are defined by \eqref{3.1} with the requirement $(m,n)\in\bbH_2$; in other words, $F^{\bbH_2}_{k,l}=F^{{\mathbb A}} \def\bbD{{\mathbb D}_2}_{k,l}\cap\bbH_2$. (Of course, in Class HC the number $D$ has to be replaced by $D^*$.) 
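The toric metric $\rho^{(k)}$ defined above can be transcribed directly from the displayed formulas. The following sketch (our naming) implements $\rho^{(k)}_1$ and $\rho^{(k)}$ exactly as written, with coordinates treated as in the display; the lattice-specific geometry plays no role in the periodic reduction itself.

```python
# Sketch of the toric metric rho^(k) on T(k), transcribed from the
# displayed formulas: rho1 is the circle distance on [-k*D^2, k*D^2)
# of circumference 2*k*D^2, and rho_k combines the two coordinates.
def rho1(x: int, y: int, k: int, d2: int) -> int:
    """rho^(k)_1(x, y) = min{y - x, x + 2*k*D^2 - y} for x <= y."""
    lo, hi = sorted((x, y))
    return min(hi - lo, lo + 2 * k * d2 - hi)

def rho_k(x, y, k, d2):
    """rho^(k)(x, y) = sqrt(rho1(x1, y1)^2 + rho1(x2, y2)^2)."""
    return (rho1(x[0], y[0], k, d2) ** 2
            + rho1(x[1], y[1], k, d2) ** 2) ** 0.5
```

For example, with $k=1$ and $D^2=4$ (so the interval is $[-4,4)$ of circumference $8$), $\rho^{(1)}_1(-4,3)=\min\{7,1\}=1$: the endpoints are close across the identification $(4,n)\equiv(-4,n)$.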
Then the torus $\bbT (k)=\bbT^{\bbH_2} (k)$ is introduced, as $\bbT^{\bbH_2} (k)=\bbT^{{\mathbb A}} \def\bbD{{\mathbb D}_2} (k)\cap\bbH_2$ where $\bbT^{{\mathbb A}} \def\bbD{{\mathbb D}_2} (k)$ is defined in \eqref{(9.2)}. The set $\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C_{\rm{per},k}$ and measures $\mu_{{\rm{per}},k}$ and ${\mbox{\boldmath${\mu}$}}_{\rm{per}}$ are defined as above: see \eqref{(101)} and \eqref{(110)}. For the exceptional values $D^2$, the above construction needs the following modifications. For each PGS ${\varphi}$ we have a period parallelogram $\Pi ({\varphi} )$ with sides ${\mathbf e}} \def\bh{{\mathbf h}^{(1)}({\varphi} ) =\left(e^{(1)}_1({\varphi} ),e^{(1)}_2({\varphi} )\right)$ and ${\mathbf e}} \def\bh{{\mathbf h}^{(2)}({\varphi} )=\left(e^{(2)}_1({\varphi} ),e^{(2)}_2({\varphi} )\right)$ where ${\tt T}_{{\mathbf e}} \def\bh{{\mathbf h}^{(i)}({\varphi} )}{\varphi} ={\varphi}$, $i=1,2$: $$\begin{array}{l}} \def\beac{\begin{array}{c}} \def\bear{\begin{array}{r}\Pi ({\varphi} )=\Big\{(m,n)\in\bbH_2:\, (m,n)={\epsilon}} \def\veps{{\varepsilon}_1{\mathbf e}} \def\bh{{\mathbf h}^{(1)}({\varphi} )+{\epsilon}} \def\veps{{\varepsilon}_2{\mathbf e}} \def\bh{{\mathbf h}^{(2)}({\varphi} )\;\hbox{ for $0\leq{\epsilon}} \def\veps{{\varepsilon}_i<1$, $i=1,2$}\Big\}.\end{array}$$ Template $F_{0,0}=F^{\bbH_2}_{0,0}$ for an exceptional $D^2$ can be defined as a parallelogram \begin{equation}\label{ExceTempl}\begin{array}{l}} \def\beac{\begin{array}{c}} \def\bear{\begin{array}{r} F_{0,0}=\Big\{(m,n)\in\bbH_2:\, (m,n)={\epsilon}} \def\veps{{\varepsilon}_1{\overline} \def\ovp{{\overline p}{\mathbf e}} \def\bh{{\mathbf h}}^{(1)}(D)+{\epsilon}} \def\veps{{\varepsilon}_2{\overline} \def\ovp{{\overline p}{\mathbf e}} \def\bh{{\mathbf h}}^{(2)}(D)\;\hbox{ for $0\leq{\epsilon}} \def\veps{{\varepsilon}_i<1$, $i=1,2$}\Big\}.\end{array}\end{equation} Here ${\overline} \def\ovp{{\overline p}{\mathbf e}} \def\bh{{\mathbf h}}^{(i)}(D)=\left({\overline} 
\def\ovp{{\overline p} e}^{(i)}_1(D),{\overline} \def\ovp{{\overline p} e}^{(i)}_2(D)\right)$, ${\overline} \def\ovp{{\overline p} e}^{(i)}_j(D) ={\rm{LCM}}\{e^{(i)}_j({\varphi} ):\,{\varphi}\in\mathscr P} \def\sR{\mathscr R (D)\}$, $i,j=1,2$. Finally, we set $F_{k,l}={\tt T}_{k{\overline} \def\ovp{{\overline p}{\mathbf e}} \def\bh{{\mathbf h}}^{(1)}(D)+l{\overline} \def\ovp{{\overline p}{\mathbf e}} \def\bh{{\mathbf h}}^{(2)}(D)}F_{0,0}$, $k,l\in\bbZ$, to form a partition of $\bbH_2$. Pictorially, each PGS ${\varphi}$ is periodic relative to the sub-lattice defined by vectors ${\mathbf e}} \def\bh{{\mathbf h}^{(i)}({\varphi} )$, $i=1,2$. Template $F_{0,0}=F^{\bbH_2}_{0,0}$ is the fundamental parallelogram for the sub-lattice obtained as the intersection of these period sub-lattices over all ${\varphi}\in\mathscr P} \def\sR{\mathscr R (D)$. Let ${\varphi}\in\mathscr P} \def\sR{\mathscr R(D)$ be a $D$-PGS and $\phi\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C(D)$ be an admissible configuration, on ${\mathbb A}} \def\bbD{{\mathbb D}_2$ or $\bbH_2$. Following the definition of correctness on P. 561 in \cite{Za}, we say that a template $F_{k, l}$ is {\it ${\varphi}$-correct} in $\phi$ if $\phi ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})={\varphi} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})$ for every site ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$ lying in the 9 templates $F_{k + i, l + j}$, where $i, j = -1, 0, 1$. The 9 templates include the initial template $F_{k, l}$ and the 8 neighboring templates, considered as connected to $F_{k, l}$. Cf. Figure 11. A {\it contour support} in a configuration $\phi\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C$ is defined as a connected component of the union of templates which are not ${\varphi}$-correct in $\phi$ for any ${\varphi}\in\mathscr P} \def\sR{\mathscr R$. 
A {\it contour} in $\phi$ is defined as a pair \begin{equation}\label{Gamma}{\Gamma}} \def\gam{{\gamma} =\left(\rSp\,({\Gamma}} \def\gam{{\gamma} ), \phi\upharpoonright_{\rSp\,({\Gamma}} \def\gam{{\gamma} )}\right)\end{equation} consisting of a contour support $\rSp\,({\Gamma}} \def\gam{{\gamma} )$ and the restriction $\phi\upharpoonright_{\rSp\,({\Gamma}} \def\gam{{\gamma} )}$. These definitions are specifications, for the H-C model, of general definitions on P. 561 in \cite{Za}. Accordingly, we define the sets ${\rm{Int}}\,({\Gamma}} \def\gam{{\gamma} )$, ${\rm{Int}}_{\varphi} ({\Gamma}} \def\gam{{\gamma} )$ and ${\rm{Ext}}\,({\Gamma}} \def\gam{{\gamma} )$ by using Eqn (1.5) from \cite{Za}. \FigureL12 Inside each of ${\rm{Ext}}\,({\Gamma}} \def\gam{{\gamma} )$, $\rSp\,({\Gamma}} \def\gam{{\gamma} )$ and ${\rm{Int}}_{\varphi} ({\Gamma}} \def\gam{{\gamma} )$ we can specify a {\it boundary layer}: it is a connected set of templates where each template has a neighboring template outside the corresponding ${\rm{Ext}}\,({\Gamma}} \def\gam{{\gamma} )$, $\rSp\,({\Gamma}} \def\gam{{\gamma} )$, or ${\rm{Int}}_{\varphi}({\Gamma}} \def\gam{{\gamma} )$. Each of ${\rm{Ext}}\,({\Gamma}} \def\gam{{\gamma} )$ and ${\rm{Int}}_{\varphi} ({\Gamma}} \def\gam{{\gamma} )$ has a single corresponding boundary layer while $\rSp\,({\Gamma}} \def\gam{{\gamma} )$ has several of them. Every boundary layer in $\rSp\,({\Gamma}} \def\gam{{\gamma} )$ has a corresponding (dual) boundary layer inside ${\rm{Ext}}\,({\Gamma}} \def\gam{{\gamma} )$ or ${\rm{Int}}_{\varphi}({\Gamma}} \def\gam{{\gamma} )$. Moreover, in every boundary layer all occupied sites belong to the same ${\varphi}$ (which justifies the notation ${\rm{Int}}_{\varphi} ({\Gamma}} \def\gam{{\gamma} )$). Finally, following Eqn (1.5) from \cite{Za}, a contour for which a boundary layer of ${\rm{Ext}}\, ({\Gamma}} \def\gam{{\gamma} )$ belongs to the PGS ${\varphi}$ is called a ${\varphi}$-{\it contour}. See Figure 12. 
Physically speaking, a ${\varphi}$-contour emerges when we add to ${\varphi}$ a number of particles at some `inserted' sites and simultaneously remove the particles from ${\varphi}$ which are `repelled' by the inserted particles. The latter will be referred to as removed sites/particles. The whole procedure should of course maintain admissibility. In fact, for any attainable $D^2$ on ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and any attainable $D^2\neq 4, 7, 31, 133$ on $\bbH_2$, for $u$ large enough, every EGM ${\mbox{\boldmath${\mu}$}}$ has the property that, with ${\mbox{\boldmath${\mu}$}}$-probability $1$, the AC $\phi$ has no infinite contours. Cf. Theorem III(iv) in Section 3.2. Let ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ be a basic polygon and ${\varphi}\in\mathscr P} \def\sR{\mathscr R$ be a PGS. Then the partition function ${\mathbf Z} ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|{\varphi} )$ can be written in the form \begin{equation}\label{PFthruCs} {\mathbf Z} ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|{\varphi} )=\sum_{\{{\Gamma}} \def\gam{{\gamma}_i\}\;{\rm{in}}\;{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}}\prod\limits_iw ({\Gamma}} \def\gam{{\gamma}_i). \end{equation} Here and below, $w({\Gamma}} \def\gam{{\gamma} )$ stands for the statistical {\it weight} of contour ${\Gamma}} \def\gam{{\gamma}$: \begin{equation}\label{SWoC} w ({\Gamma}} \def\gam{{\gamma} )=u^{\sharp (\psi_{\Gamma}} \def\gam{{\gamma} )-\sharp ({\varphi}_{\Gamma}} \def\gam{{\gamma} )}. \end{equation} Further, the summation in Eqn \eqref{PFthruCs} is extended to collections of contours ${\Gamma}} \def\gam{{\gamma}_i$ in ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ compatible in the sense of the (general) PS theory; cf. \cite{Si}, \cite{Za}. 
Here we say that ${\Gamma}} \def\gam{{\gamma}$ is a contour in ${\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ if the set $\rSp\,({\Gamma}} \def\gam{{\gamma} )\setminus{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}$ is empty or lies in the boundary layer of Ext $({\Gamma}} \def\gam{{\gamma} )$. \subsection{Pirogov--Sinai theory for the H-C model}\label{SubSec3.2} In Theorems I and II below we summarize our results on PGSs and EGMs on both lattices, ${\mathbb A}} \def\bbD{{\mathbb D}_2$ and $\bbH_2$. These theorems form a prerequisite for the use of the PS theory. Applying the PS theory, we obtain Theorem III which holds true for all Classes of $D$ except for HS. In Theorems 1--13 we identify the PGSs and the EGMs for the respective Classes of values of $D$. \begin{thmI}\label{ThmI} \begin{description} \item[(i)] For any attainable $D>1$, the set $\mathscr P} \def\sR{\mathscr R (D,{\mathbb A}} \def\bbD{{\mathbb D}_2 )$ consists of $D$-sub-lattices and their shifts and reflections. In particular, set $\mathscr P} \def\sR{\mathscr R (D,{\mathbb A}} \def\bbD{{\mathbb D}_2 )$ is finite. \item[(ii)] For any attainable non-sliding $D>1$, the set $\mathscr P} \def\sR{\mathscr R (D,\bbH_2 )$ is finite. If $D^2$ is from Classes {\rm{HA}} or {\rm{HB}} then $\mathscr P} \def\sR{\mathscr R (D,\bbH_2 )$ consists of $(\alpha, D)$-configurations. If $D^2$ is from Class {\rm{HC}} then $\mathscr P} \def\sR{\mathscr R (D,\bbH_2 )$ consists of $(\alpha, D^*)$-configurations. If $D^2$ is from Class {\rm{HD1}} then $\mathscr P} \def\sR{\mathscr R (D,\bbH_2 )$ consists of $(\beta, D)$-configurations. If $D^2$ is from Class {\rm{HD2}} then $\mathscr P} \def\sR{\mathscr R (D,\bbH_2 )$ consists of $(\gamma, D)$-configurations. For $D^2=67$ (Class {\rm{HE}}), set $\mathscr P} \def\sR{\mathscr R (D,\bbH_2 )$ consists of $(\beta, D)$- and $(\alpha, D^*)$-configurations where $(D^*)^2=75$. 
\end{description} \end{thmI} The proof of Theorem I involves the material of Section \ref{Sec4} and is completed in Section \ref{SubSec4.6}. Let us define \begin{equation}\label{SoD}\begin{array}{l}} \def\beac{\begin{array}{c}} \def\bear{\begin{array}{r} S=S(D)=D^2{\sqrt 3}/2=2\times\Big(\hbox{the area of a $D$-triangle}\Big).\end{array}\end{equation} Next, for exceptional non-sliding values $D^2 = 13, 16, 28, 49, 64, 67, 97, 157, 256$ on $\bbH_2$ we set: \begin{equation}\beac S^{\rR\rD} (\sqrt{13})=16.5 {\sqrt 3}/2,\;S^{\rR\rD} (\sqrt{16})=20.25{\sqrt 3}/2,\;S^{\rR\rD} (\sqrt{28})=33{\sqrt 3}/2,\\ S^{\rR\rD} (\sqrt{49}) =55.5{\sqrt 3}/2,\;\; S^{\rR\rD} (\sqrt{64}) =72{\sqrt 3}/2, \;\;S^{\rR\rD} (\sqrt{67}) = 75{\sqrt 3}/2, \\ S^{\rR\rD} (\sqrt{97}) = 106.5{\sqrt 3}/2,\;\;S^{\rR\rD} (\sqrt{157}) = 169.5{\sqrt 3}/2,\;\;S^{\rR\rD} (\sqrt{256}) = 272.25 {\sqrt 3}/2.\end{array}\end{equation} Here the notation $S^{\rR\rD}(D)$ refers to a minimal re-distributed triangle area for a given value $D$ from the above list. The general concept of a re-distributed area will be introduced in Section \ref{SubSec4.2}. The Peierls bound is established in Theorem II below. 
It refers to the quantity \begin{equation}\label{NoTs}\|{\rm Supp}({\Gamma}} \def\gam{{\gamma}) \|:=\hbox{the number of (incorrect) templates in}\;{\rm Supp}({\Gamma}} \def\gam{{\gamma}).\end{equation} \begin{thmII} \label{ThmII} (The Peierls bound for contours) The weight $w({\Gamma}} \def\gam{{\gamma} )$ of contour \newline ${\Gamma}} \def\gam{{\gamma} =\left(\rSp\,({\Gamma}} \def\gam{{\gamma} ), \phi\upharpoonright_{\rSp\,({\Gamma}} \def\gam{{\gamma} )}\right)$ obeys the bound \begin{equation}\label{PeierlsB} \qquad\qquad\qquad w({\Gamma}} \def\gam{{\gamma}) \le u^{- p\|\rSp\;({\Gamma}} \def\gam{{\gamma} )\|}.\end{equation} Here $p >0$ (the Peierls constant) satisfies $\bullet$ on ${\mathbb A}} \def\bbD{{\mathbb D}_2$: \begin{equation}\label{pon1} p=p(D,{\mathbb A}} \def\bbD{{\mathbb D}_2)\geq {\sqrt 3}/(288S(D)),\end{equation} $\bullet$ on $\bbH_2$: \begin{equation}\label{pon2} p=p(D,\bbH_2)\geq {\sqrt 3}/(288S(D)),\;\hbox{ if $D^2$ falls in Class {\rm{HA}} or {\rm{HB}},}\end{equation} \begin{equation}\label{pon3}\begin{array}{l}} \def\beac{\begin{array}{c}} \def\bear{\begin{array}{r} p=p(D,\bbH_2)\geq {\sqrt 3}/(288S(D^*)),\;\hbox{if $D^2$ falls in Class {\rm{HC}}, where $(D^*)^2$}\\ \qquad\hbox{is the closest L\"oschian number with $(D^*)^2>D^2$ such that $3\big|(D^*)^2$,}\end{array} \qquad\end{equation} \begin{equation}\label{pon4} \begin{array}{l}} \def\beac{\begin{array}{c}} \def\bear{\begin{array}{r} p=p(D,\bbH_2)\geq {\sqrt 3}/(288S^{\rR\rD}(D))\;\hbox{ for $D^2 = 13, 16, 28, 49, 64, 67, 97,$}\\ \qquad \hbox{$157, 256$} \quad (\hbox{Classes {\rm{HD}}, {\rm{HE}}}).\end{array} \end{equation} \end{thmII} \begin{ra} {\rm The bounds on $p$ in Eqns \eqref{pon1}--\eqref{pon4} are far from optimal and 
have been selected in a universal form for simplicity. The value of $p$ can be improved at the expense of additional technicalities.} $\blacktriangle$ \end{ra}
The proof of Theorem II starts in Section \ref{Sec4} and is completed in Section \ref{SubSec5.1}.
\begin{thmIII} \label{ThmIII} For every $D$ there exists a value $u_0(D,{\mathbb A}_2)\in (0,\infty )$, and for every $D\neq 4, 7, 31, 133$ there exists a value $u_0(D,\bbH_2)\in (0,\infty )$, such that, for $u\geq u_0 (D,\,\cdot\,)$, on ${\mathbb A}_2$ or $\bbH_2$, respectively, the following assertions hold true.
\begin{description}
\item[{\rm{(i)}}] Each {\rm{EGM}} ${\mbox{\boldmath${\mu}$}}\in\sE (D)$ is generated by a {\rm{PGS}}. That is, each {\rm{EGM}} is of the form ${\mbox{\boldmath${\mu}$}}_{\varphi}$ for some ${\varphi}\in\mathscr P (D)$. If {\rm{PGS}}s ${\varphi}_i$ generate {\rm{EGM}}s ${\mbox{\boldmath${\mu}$}}_{{\varphi}_i}$, $i=1,2$, and ${\varphi}_1\neq{\varphi}_2$, then ${\mbox{\boldmath${\mu}$}}_{{\varphi}_1}\perp {\mbox{\boldmath${\mu}$}}_{{\varphi}_2}$. The {\rm{EGM}}s inherit the symmetries of the {\rm{PGS}}s: if {\rm{PGS}}s ${\varphi}_i$ generate {\rm{EGM}}s ${\mbox{\boldmath${\mu}$}}_{{\varphi}_i}$, $i=1,2$, and ${\varphi}_1={\tt T}_{\bu}{\varphi}_2$ or ${\varphi}_1=\tR{\varphi}_2$, then ${\mbox{\boldmath${\mu}$}}_{{\varphi}_1}={\tt T}_{\bu}{\mbox{\boldmath${\mu}$}}_{{\varphi}_2}$ or ${\mbox{\boldmath${\mu}$}}_{{\varphi}_1}=\tR{\mbox{\boldmath${\mu}$}}_{{\varphi}_2}$, respectively.
\item[{\rm{(ii)}}] {\rm{EGM}}-generation is a class property: if a {\rm{PGS}} ${\varphi}\in\mathscr P (D)$ generates an {\rm{EGM}} ${\mbox{\boldmath${\mu}$}}_{\varphi}$ then every {\rm{PGS}} $\widetilde{\varphi}$ from the same {\rm{PGS}}-equivalence class generates an {\rm{EGM}} ${\mbox{\boldmath${\mu}$}}_{\widetilde{\varphi}}$. Such a class is referred to as dominant. If an equivalence class is unique, it is dominant.
\item[{\rm{(iii)}}] Measure ${\mbox{\boldmath${\mu}$}}_{\rm{per}}$ exists and is a uniform mixture of the measures ${\mbox{\boldmath${\mu}$}}_{\varphi}$ where ${\varphi}$ runs through all dominant {\rm{PGS}}-equivalence classes. If there is a single equivalence class then ${\mbox{\boldmath${\mu}$}}_{\rm{per}}$ is a uniform mixture of all measures ${\mbox{\boldmath${\mu}$}}_{\varphi}$ where ${\varphi}\in\mathscr P$.
\item[{\rm{(iv)}}] Each {\rm{EGM}} ${\mbox{\boldmath${\mu}$}}_{\varphi}$ exhibits the following properties. For ${\mbox{\boldmath${\mu}$}}_{\varphi}$-almost all $\phi\in\mathscr A$:
\begin{description}
\item[{\rm{(A)}}] All contours ${\Gamma}$ in $\phi$ are finite.
\item[{\rm{(B)}}] For any site ${\mathbf x}$ there exist only finitely many contours ${\Gamma}$ (possibly none) such that ${\mathbf x}\in{\rm{Int}} ({\Gamma} )\cup\rSp\,({\Gamma} )$.
\item[{\rm{(C)}}] There are countably many disjoint connected sets of ${\varphi}$-correct templates, one of which is infinite and all remaining ones are finite.
\item[{\rm{(D)}}] For any ${\varphi}'\in\mathscr P\setminus\{{\varphi}\}$, there are countably many disjoint connected sets of ${\varphi}'$-correct templates, and they all are finite.
\end{description}
\item[{\rm{(v)}}] Measure ${\mbox{\boldmath${\mu}$}}_{\varphi}$ admits a polymer expansion and consequently has an exponential decay of correlations.
\item[{\rm{(vi)}}] As $u\to\infty$, measure ${\mbox{\boldmath${\mu}$}}_{\varphi}$ converges weakly to a measure sitting on a single {\rm{AC}} ${\varphi}$.
\end{description}
\end{thmIII}
\begin{proofIII} (i, ii) According to the Corollary on p.
565 in \cite{Za}, there exists at least one {\rm{PGS}} ${\varphi}\in\mathscr P (D)$ which generates an {\rm{EGM}} ${\mbox{\boldmath${\mu}$}}_{\varphi}\in\sE (D)$. It is obvious that every {\rm{PGS}} $\widetilde{\varphi}$ from the same equivalence class generates an {\rm{EGM}} ${\mbox{\boldmath${\mu}$}}_{\widetilde{\varphi}}$ and that the {\rm{EGM}}s ${\mbox{\boldmath${\mu}$}}_{\varphi}$, ${\mbox{\boldmath${\mu}$}}_{\widetilde{\varphi}}$ are related by the same symmetry as the {\rm{PGS}}s ${\varphi}$, $\widetilde{\varphi}$. The fact that each {\rm{EGM}} ${\mbox{\boldmath${\mu}$}}$ is generated by a {\rm{PGS}} follows from the Corollary on p. 578 in \cite{Za}, completed with Theorem 1 from \cite{DoS}. The mutual singularity of the measures ${\mbox{\boldmath${\mu}$}}_{{\varphi}_1}$ and ${\mbox{\boldmath${\mu}$}}_{{\varphi}_2}$ for ${\varphi}_1\neq{\varphi}_2$ can be deduced from assertion (iv), which is derived from \cite{Za} below. Passing to assertion (iii), the main concern is the existence of contours {\it winding} around the torus $\bbT_k$. However, the $\mu_{{\rm{per}},k}$-probability of the event $\cW_k\subset\mathscr A_{{\rm{per}},k}$ that such a contour is present in an admissible configuration $\phi_{\bbT^{(1)}_k}$ becomes negligible as $k\to\infty$, as winding contours are too large. On the remaining event, ${\overline\cW}_k =\mathscr A_{{\rm{per}},k}\setminus\cW_k$, the statistics of the random configuration is described in terms of the ensemble of {\it external contours}.
Furthermore, the event ${\overline\cW}_k$ can be partitioned into $\sharp\sE$ parts, ${\overline\cW}_{k,{\mbox{\boldmath${\mu}$}}_{\varphi}}$, ${\mbox{\boldmath${\mu}$}}_{\varphi}\in\sE$, so that on ${\overline\cW}_{k,{\mbox{\boldmath${\mu}$}}_{\varphi}}$ all external contours are ${\varphi}$-contours, and each ${\overline\cW}_{k,{\mbox{\boldmath${\mu}$}}_{\varphi}}$ has the same limiting probability: $\lim\limits_{k\to\infty} \mu_{{\rm{per}},k}\left({\overline\cW}_{k,{\mbox{\boldmath${\mu}$}}_{\varphi}}\right)= \displaystyle\frac{1}{\sharp\sE}$. $\bigg($Here we use the property that the ratio $\displaystyle\frac{\mu_{{\rm{per}},k}\left({\overline\cW}_{k,{\mbox{\boldmath${\mu}$}}_{\varphi}}\right)}{\mu_{{\rm{per}},k}\left({\overline\cW}_{k,{\mbox{\boldmath${\mu}$}}_{{\varphi}'}}\right)}$ tends to $1$ as $k\to\infty$ for any choice of {\rm{EGM}}s ${\mbox{\boldmath${\mu}$}}_{\varphi},{\mbox{\boldmath${\mu}$}}_{{\varphi}'}\in\sE$. This follows from the contour representation for the sum $\sum\limits_{\phi_{\bbT (k)}\in{\overline\cW}_{k,\mu_{\varphi}}}\;\prod\limits_{{\mathbf x}\in\bbT_k} u^{\phi_{\bbT(k)}({\mathbf x} )}={\mathbf Z}_{\rm{per}}(\bbT (k))\mu_{{\rm{per}},k}\left({\overline\cW}_{k,{\mbox{\boldmath${\mu}$}}_{\varphi}}\right)$.$\bigg)$ This argument leads to the formula $\lim\limits_{k\to\infty}\mu_{{\rm{per}},k}={\displaystyle\frac{1}{\sharp\sE}}\sum\limits_{{\mbox{\boldmath${\mu}$}}_{\varphi}\in\sE}{\mbox{\boldmath${\mu}$}}_{\varphi}$. (iv) Statements (A, B) follow from the fact that in an {\rm{EGM}} ${\mbox{\boldmath${\mu}$}}_{\varphi}$, the probability of a contour ${\Gamma}$ is $\leq u^{-\|{\rm Supp}({\Gamma} )\| p(D)/3}$. Cf.
\cite{Za}, Theorem on p. 564. In turn, (C, D) follow from (A, B). Statements (v, vi) follow from \cite{Za}, Theorem on p. 564. \end{proofIII}
In Theorems 1--13 below we work under the condition $u >u_0(D,\,\cdot\,)$ assumed in Theorem III.
\subsection{PGSs and EGMs for Class TA}\label{SubSec3.3}
Here and below, we say that a sub-lattice in ${\mathbb A}_2$ has type $(a,b)$, or is an $(a,b)$-sub-lattice, when it is a $D$-sub-lattice generated by the sites $(0,0)$ and $(a+b,-b)$, where $a,b\in\bbZ$ and $D^2=a^2+b^2+ab$. If $ab=0$, the corresponding $D$-sub-lattice is horizontal; if $a=b$, it is vertical. In the cases where $ab\neq 0$ and $a\neq b$, we have inclined $D$-sub-lattices. Correspondingly, a PGS-equivalence class is called an $(a,b)$-class, or class $(a,b)$, if it contains a sub-lattice of type $(a,b)$. We denote this class by $\mathscr P(a,b)$, and its PGSs are called $(a,b)$-PGSs. We also use the term $\alpha$-configuration and its specifications, a $(D,\alpha )$-configuration, an $((a,b),\alpha )$-configuration, or an $\alpha$-configuration of type $(a,b)$, interchangeably.
\begin{thma} (Class {\rm{TA1}})
\begin{description}
\item[{\rm{(i)}}] Let $D$ be an integer not divisible by primes of the form $3v+1$. Then in ${\mathbb A}_2$ there is a unique $D$-sub-lattice, which is horizontal and has type $(D,0)$. Thus, on ${\mathbb A}_2$ there is a single {\rm{PGS}}-equivalence class, which contains $D^2$ different {\rm{PGS}}s. The {\rm{PGS}}s are horizontal $\big((D,0),\alpha\big)$-configurations, hence reflection-invariant. Different {\rm{PGS}}s are obtained from each other by ${\mathbb A}_2$-shifts. Consequently, for such $D$ the number of {\rm{EGM}}s on ${\mathbb A}_2$ equals $D^2$.
\item[{\rm{(ii)}}] Let $D/\sqrt{3}$ be an integer not divisible by primes of the form $3v+1$.
Then in ${\mathbb A}_2$ there is a unique $D$-sub-lattice, which is vertical and has type $(\frac{D}{\sqrt 3},\frac{D}{\sqrt 3})$. Thus, on ${\mathbb A}_2$ there is a single {\rm{PGS}}-equivalence class, which contains $D^2$ different {\rm{PGS}}s. The {\rm{PGS}}s are vertical $\Big((\frac{D}{\sqrt 3},\frac{D}{\sqrt 3}),\alpha\Big)$-configurations, hence reflection-invariant. Different {\rm{PGS}}s are obtained from each other by ${\mathbb A}_2$-shifts. Consequently, for such $D$ the number of {\rm{EGM}}s on ${\mathbb A}_2$ equals $D^2$.
\end{description}
\end{thma}
\begin{thmb} (Class {\rm{TA2}}) Let $D^2$ be an integer whose prime decomposition contains {\rm{(i)}} a factor $3$ in any power, {\rm{(ii)}} primes of the form $3v+2$ in even powers, possibly zero, and {\rm{(iii)}} a single prime of the form $3v+1$. Then in ${\mathbb A}_2$ there are exactly two $D$-sub-lattices; they are inclined and taken to each other by reflections. Hence, on ${\mathbb A}_2$ there is a single {\rm{PGS}}-equivalence class, which contains $2D^2$ {\rm{PGS}}s. The {\rm{PGS}}s are inclined $(D,\alpha)$-configurations, hence not reflection-invariant. Different {\rm{PGS}}s are obtained from each other by ${\mathbb A}_2$-shifts and reflections. Consequently, for such $D$ the number of {\rm{EGM}}s on ${\mathbb A}_2$ equals $2D^2$.
\end{thmb}
\subsection{PGSs and EGMs for Class TB}\label{SubSec3.4}
For a generic $D$ from Class TB, there are at least three $D$-sub-lattices among the PGSs on ${\mathbb A}_2$.
\begin{thmc} (Class {\rm{TB}}) Suppose that the prime decomposition of $D^2$ contains {\rm{(i)}} a factor $3$ in any power, {\rm{(ii)}} primes of the form $3v+2$ in even powers, possibly zero, and {\rm{(iii)}} $M\geq 2$ primes of the form $3v+1$, some of which may coincide.
Then the following assertions hold true.
\begin{description}
\item[{\rm{(i)}}] The number of {\rm{PGS}}-equivalence classes on ${\mathbb A}_2$ is $\geq 2$, and it increases when $\lceil M/2\rceil$ increases.
\item[{\rm{(ii)}}] At most one class contains $D^2$ {\rm{PGS}}s. It consists of horizontal $(D,0)$-{\rm{PGS}}s if $D$ is an integer, or of vertical $(\frac{D}{\sqrt 3},\frac{D}{\sqrt 3})$-{\rm{PGS}}s if $D/{\sqrt 3}$ is an integer. All other equivalence classes contain two inclined $D$-sub-lattices and $2D^2$ {\rm{PGS}}s each; these sub-lattices are taken to each other by reflections.
\item[{\rm{(iii)}}] Furthermore, a measure ${\mbox{\boldmath${\mu}$}}_{\varphi}$ is reflection-invariant iff the {\rm{PGS}} ${\varphi}$ comes from a dominant equivalence class of cardinality $D^2$.
\item[{\rm{(iv)}}] Let $J=J(D,{\mathbb A}_2)$ denote the number of dominant equivalence classes, labeled by $1,\ldots ,J$ in an arbitrary order. Let $m_jD^2$ stand for the number of {\rm{PGS}}s in the dominant class $j$, where $m_j=1, 2$ and $1\leq j\leq J$. Then the total number of {\rm{EGM}}s equals $D^2\sum\limits_{j=1}^Jm_j$.
\end{description}
\end{thmc}
As we said earlier, we conjecture that the number of dominant classes $J(D,{\mathbb A}_2)$ equals $1$. This is confirmed in several examples considered in Theorems 4--6 on ${\mathbb A}_2$ and Theorem 10 on $\bbH_2$. The analysis of dominance for selected examples of $D^2$ from Class TB is given in Theorems 4--6. For these examples, we reach the level of Theorems 1, 2 in the description of the structure of the {\rm{EGM}}s. The examples have been selected to demonstrate different outcomes of the competition between inclined and horizontal or vertical equivalence classes.
\begin{thmd} For $D^2=49$ on ${\mathbb A}_2$, there are $147$ {\rm{PGS}}s divided into two equivalence classes: horizontal $(7,0)$ and inclined $(5,3)$.
The $(7,0)$-class $\mathscr P (7,0)$ consists of $49$ {\rm{PGS}}s and is the only dominant one. The $(7,0)$-{\rm{PGS}}s are reflection-invariant and obtained from each other by ${\mathbb A}_2$-shifts. Consequently, we have in total $49$ {\rm{EGM}}s, and they all are of the form ${\mbox{\boldmath${\mu}$}}_{\varphi}$ where ${\varphi}\in\mathscr P (7,0)$.
\end{thmd}
In the next result the choice of the dominant {\rm{PGS}} class between the inclined and horizontal ones is inverted.
\begin{thme} For $D^2=169$ on ${\mathbb A}_2$, there are $507$ {\rm{PGS}}s divided into two equivalence classes: inclined $(8,7)$ and horizontal $(13,0)$. The $(8,7)$-class $\mathscr P (8,7)$ consists of $338$ {\rm{PGS}}s and is the only dominant one. The $(8,7)$-{\rm{PGS}}s are not reflection-invariant; they are obtained from each other by ${\mathbb A}_2$-shifts and reflections. Consequently, we have in total $338$ {\rm{EGM}}s, and they all are of the form ${\mbox{\boldmath${\mu}$}}_{\varphi}$ where ${\varphi}\in\mathscr P (8,7)$.
\end{thme}
Finally, we discuss a case where we have one vertical class, $(7,7)$, and one inclined, $(11,2)$.
\begin{thmf} For $D^2=147$ on ${\mathbb A}_2$, there are $441$ {\rm{PGS}}s divided into two equivalence classes, the vertical $(7,7)$ and the inclined $(11,2)$. The $(7,7)$-class $\mathscr P (7,7)$ consists of $147$ {\rm{PGS}}s and is the only dominant one. The $(7,7)$-{\rm{PGS}}s are reflection-invariant and obtained from each other by ${\mathbb A}_2$-shifts. Consequently, we have in total $147$ {\rm{EGM}}s, and they all are of the form ${\mbox{\boldmath${\mu}$}}_{\varphi}$ where ${\varphi}\in\mathscr P (7,7)$.
\end{thmf}
\subsection{PGSs and EGMs for Classes HA, HB and HC}\label{SubSec3.5}
The results for Classes HA and HB go in parallel with those for Classes TA and TB. Recall, for a value $D^2$ from Classes HA and HB, the PGSs on $\bbH_2$ are $(D,\alpha )$-PGSs restricted to $\bbH_2$; see Theorem I(ii). We will use on $\bbH_2$ the same terminology as on ${\mathbb A}_2$.
\begin{thmg} (Class {\rm{HA1}})
\begin{description}
\item[{\rm{(i)}}] Let $D/3$ be an integer not divisible by primes of the form $3v+1$. Then on $\bbH_2$ there is a single {\rm{PGS}}-equivalence class, which contains $2D^2/3$ {\rm{PGS}}s. The {\rm{PGS}}s are horizontal $((D,0),\alpha)$-configurations, hence reflection-invariant. Different {\rm{PGS}}s are obtained from each other by $\bbH_2$-shifts. Consequently, the number of {\rm{EGM}}s on $\bbH_2$ equals $2D^2/3$.
\item[{\rm{(ii)}}] Let $D/\sqrt{3}$ be an integer not divisible by primes of the form $3v+1$. Then on $\bbH_2$ there is a single {\rm{PGS}}-equivalence class, which contains $2D^2/3$ {\rm{PGS}}s. The {\rm{PGS}}s are vertical $\left((\frac{D}{\sqrt 3},\frac{D}{\sqrt 3}),\alpha\right)$-configurations, hence reflection-invariant. Different {\rm{PGS}}s are obtained from each other by $\bbH_2$-shifts. Consequently, the number of {\rm{EGM}}s on $\bbH_2$ equals $2D^2/3$.
\end{description}
\end{thmg}
\begin{thmh} (Class {\rm{HA2}}) Let $D^2$ be an integer whose prime decomposition contains {\rm{(i)}} at least one factor $3$, {\rm{(ii)}} primes of the form $3v+2$ in even powers, possibly zero, and {\rm{(iii)}} a single prime of the form $3v+1$. Then on $\bbH_2$ there is a single {\rm{PGS}}-equivalence class, which contains $4D^2/3$ {\rm{PGS}}s. The {\rm{PGS}}s are inclined $(D,\alpha)$-configurations, hence not reflection-invariant. Different {\rm{PGS}}s are obtained from each other by shifts and reflections. Consequently, the number of {\rm{EGM}}s on $\bbH_2$ equals $4D^2/3$.
\end{thmh}
In Theorem 9 we use the same terminology of dominant classes as in Theorem 3.
\begin{thmi} (Class {\rm{HB}}) Suppose that the prime decomposition of $D^2$ contains {\rm{(i)}} at least one factor $3$, {\rm{(ii)}} primes of the form $3v+2$ in even powers, possibly zero, and {\rm{(iii)}} at least two prime factors of the form $3v+1$, some of which may coincide. Then all {\rm{PGS}}s are $(\bbH_2,D,\alpha )$-configurations obtained as the restrictions to $\bbH_2$ of their $({\mathbb A}_2,D,\alpha )$-counterparts. Thus, the number of {\rm{PGS}}-equivalence classes on $\bbH_2$ is the same as on ${\mathbb A}_2$. Furthermore, assertions {\rm{(ii)}}--{\rm{(iv)}} of Theorem $3$ are transferred from ${\mathbb A}_2$ to $\bbH_2$, with the proviso that the number of {\rm{PGS}}s in the equivalence classes is $2D^2/3$ in place of $D^2$ and $4D^2/3$ in place of $2D^2$. Hence, in assertion {\rm{(iv)}}, the total number of $D$-{\rm{PGS}}s on $\bbH_2$ should be equal to $D^2\sum\limits_{j=1}^Jm_j$ where $m_j=2/3, 4/3$, $1\leq j\leq J$, and $J=J(D,\bbH_2)$.
\end{thmi}
We conjecture that, for any $D^2$ from Class HB, $J(D,\bbH_2)=1$. An analog of Theorem 6 is
\begin{thmj} For $D^2=147$ (Class {\rm HB}), there are $294$ {\rm{PGS}}s divided into two equivalence classes, the vertical $(7,7)$ and the inclined $(11,2)$. The $(7,7)$-class $\mathscr P (7,7)$ consists of $98$ {\rm{PGS}}s and is the only dominant one. The {\rm{PGS}}s ${\varphi}\in\mathscr P (7,7)$ are reflection-invariant and obtained from each other by $\bbH_2$-shifts. Consequently, we have in total $98$ {\rm{EGM}}s ${\mbox{\boldmath${\mu}$}}_{\varphi}$, where ${\varphi}\in\mathscr P (7,7)$.
\end{thmj}
Results for Class HC are given in Theorem 11 below.
\begin{thmk} Assume $D^2$ is not divisible by $3$ and not from Classes {\rm HD}, {\rm HE} or {\rm HS}.
Consider the number $D^*$ such that $D^*>D$ and $(D^*)^2$ is the closest L\"oschian number to $D^2$ divisible by $3$. Then the {\rm PGS}s on $\bbH_2$ are the $(D^*,\alpha )$-configurations.
\begin{description}
\item[{\rm (A1)}] Suppose the value $D^*$ belongs to Class {\rm HA1}. Then the assertions of Theorem $7$ can be repeated with $D$ replaced by $D^*$. In particular, the number of {\rm EGM}s on $\bbH_2$ equals $2(D^*)^2/3$.
\item[{\rm (A2)}] Suppose the value $D^*$ belongs to Class {\rm HA2}. Then the assertions of Theorem $8$ can be repeated with $D$ replaced by $D^*$. In particular, the number of {\rm EGM}s on $\bbH_2$ equals $4(D^*)^2/3$.
\item[{\rm (B)}] Suppose the value $D^*$ belongs to Class {\rm HB}. Then the assertions of Theorem $9$ can be repeated with $D$ replaced by $D^*$. In particular, the total number of {\rm EGM}s on $\bbH_2$ equals $(D^*)^2\sum\limits_{j=1}^Jm_j$ where $m_j=2/3, 4/3$, $1\leq j\leq J$, and $J=J(D^*,\bbH_2)$.
\end{description}
\end{thmk}
\subsection{PGSs and EGMs for Class HD}\label{SubSec3.6}
To conclude our results, it remains to consider the exceptional non-sliding values $D^2$. For Class HD we have the following.
\begin{thml} Assume $D^2>1$ is from Class {\rm HD}.
\begin{description}
\item[{\rm{(i)}}] For $D^2=13,\, 28,\, 49,\, 64,\, 97,\, 157$ (sub-class {\rm{HD1}}): the number of {\rm{PGS}}s on $\bbH_2$ equals $66$, $132$, $222$, $288$, $426$ and $678$, respectively, and they are $(D,\beta )$-configurations. The {\rm{PGS}}s are not reflection-invariant and are obtained from each other by shifts and reflections. The number of {\rm{EGM}}s matches that of the {\rm{PGS}}s.
\item[{\rm{(ii)}}] For $D^2=16, 256$ (sub-class {\rm{HD2}}): the number of {\rm{PGS}}s on $\bbH_2$ equals $54$ and $726$, respectively, and they are $(D,\gamma )$-configurations. The {\rm{PGS}}s are not reflection-invariant and are obtained from each other by shifts and reflections.
The number of the $D$-{\rm{EGM}}s matches that of the {\rm{PGS}}s.
\end{description}
\end{thml}
\subsection{PGSs and EGMs for Class HE}\label{SubSec3.7}
Class HE ($D^2=67$) is the one where the description of the {\rm{EGM}}s requires new techniques and is not given in this paper. However, the analysis of the {\rm{PGS}}s can be done.
\begin{thmm} For the value $D^2=67$ (Class {\rm HE}) there are $300$ {\rm{PGS}}s of type $(D,\beta )$ and $50$ {\rm{PGS}}s of type $(D^*,\alpha )$, with $(D^*)^2=75$.
\end{thmm}
As was said earlier, we conjecture that the $(D^*,\alpha )$-PGSs form the only dominant equivalence class, and so the number of {\rm{EGM}}s equals $50$.
\section{The PGSs on ${\mathbb A}_2$ and $\bbH_2$ via MRA-triangles}\label{Sec4}
To prove Theorem I, we develop a unified approach to the analysis of PGSs covering the whole variety of cases in Theorems 1--13. It is based on the notion of a re-distributed area of a triangle in the Delaunay triangulation (DT) for a $D$-AC $\phi\in\mathscr A$ and on the concept of an MRA-triangle, i.e., a triangle minimizing the re-distributed area. In our approach, we have been inspired by (i) the idea of a local energy minimizer serving as an indicator of a PGS (see \cite{HS}) and (ii) a specific choice of the minimizer as a triangle area in a DT, together with the related notion of a saturated configuration (see \cite{ChW}). Elements of such an approach have been used for the lattice $\bbZ^2$ in \cite{MSS1}, Sect.~3.
\subsection{V-cells, C-triangles and saturated configurations}\label{SubSec4.1}
The key point of our construction is that maximizing the number of particles in an AC $\phi$ can be done through minimizing triangle areas in the Delaunay triangulation of $\phi$; see below. One caveat here is that the minimization should exclude `sliver' obtuse triangles (as their area can be arbitrarily small).
The other caveat is that the minimization is applied not to the `standard' triangle area but to a modification of it, which we call the re-distributed (RD) area $s^{\rm RD}(\triangle )$. And finally, one has to verify that the triangles minimizing the RD-area (MRA-triangles) form a tessellation of the whole of ${\mathbb A}_2/\bbH_2$. Let us pass to a formal argument. Consider an arbitrary set ${\mathbb E}\subset{\mathbb R}^2$, with at least two points, such that $\rho({\mathbf x},\by) \ge D$ for any two distinct ${\mathbf x}, \by \in {\mathbb E}$. For each ${\mathbf x} \in {\mathbb E}$ define the {\it Voronoi cell} ${\mathcal V}({\mathbf x},{\mathbb E})$ as the set of points $\bz \in {\mathbb R}^2$ satisfying $\rho({\mathbf x},\bz) \le \rho(\by,\bz)$ for all $\by \in {\mathbb E}\setminus \{{\mathbf x}\}$. The Voronoi cells (V-cells, for short) are always convex polygons. \FigureM13 We apply the above definition to a given $D$-AC $\phi\in\mathscr A (D)$ with at least two particles; this yields a collection of V-cells ${\mathcal V}({\mathbf x},\phi )$ constructed for the occupied sites ${\mathbf x}\in\phi$.
Here $\mathscr A (D)$ may stand for $\mathscr A (D,{\mathbb A}_2/\bbH_2)$ or $\mathscr A (D,{\mathbb R}^2)$. If $\phi$ has no unbounded V-cells, then to each cell ${\mathcal V}({\mathbf x},\phi )$ there is assigned a finite set of circles centered at the vertices of ${\mathcal V}({\mathbf x},\phi )$ and passing through ${\mathbf x}$. We call them {\it V-circles} in $\phi$. Each ${\mathbf x}\in\phi$ lies on at least one of the V-circles, but no ${\mathbf x}\in\phi$ falls inside a circle. The sites $\by\in\phi$ lying on a given V-circle form the vertices of a {\it constituting polygon}. These polygons form a tessellation of ${\mathbb R}^2$: they have disjoint interiors, and the union of their closures gives the entire plane. If a constituting polygon has $\geq 3$ vertices, it can be divided (non-uniquely) into {\it constituting triangles} (in short: {\rm C}-triangles); this produces the {\it Delaunay triangulation} (DT) of $\phi$ (and of ${\mathbb R}^2$). See Figure 13 (a).
\begin{lemma} \label{Lem4.1} Let $\triangle$ be a {\rm C}-triangle in a $D$-{\rm{AC}} $\phi$ and consider $3$ pairwise disjoint disks of diameter $D$ centered at the vertices of $\triangle$. Consider the $3$ sectors in these disks which are the intersections of the disks with the angles of $\triangle$, and let $\bbS (\triangle )$ denote the union of these sectors.
Then the area of $\bbS (\triangle)$, i.e., the sum of the areas of these $3$ sectors, equals $\pi D^2/8$.
\end{lemma}
\begin{proof} Let us stress that $\bbS (\triangle )$ does not necessarily lie completely inside the triangle $\triangle$. Nevertheless, the sets $\bbS (\triangle )$, where $\triangle$ runs over the {\rm C}-triangles of $\phi$, form a partition of the union of the disks $\operatornamewithlimits{\cup}\limits_{{\mathbf x}\in\phi}\bbD({\mathbf x},D/2)$ (modulo a set of measure $0$). Here $\bbD(\bu,r)$ stands for the disk of radius $r>0$ centered at $\bu\in{\mathbb R}^2$: $\bbD(\bu, r) =\{\by\in{\mathbb R}^2:\rho (\bu,\by)\leq r\}$. For each angle of size $\alpha$ in $\triangle$, the intersection with the corresponding disk is a full sector with angular measure $\alpha$ and area $\alpha D^2/8$. The sum of the triangle angles equals $\pi$. Cf. Figure 13 (b).
\end{proof}
Lemma \ref{Lem4.1} establishes the principal fact that the number of C-triangles in the DT equals the doubled number of particles in the AC $\phi$. Hence, to maximize the number of particles one would like to minimize the area of the C-triangles. However, since the triangle areas in a DT can be arbitrarily small (when a C-triangle is obtuse and has a large circumradius, i.e., the radius of the corresponding V-circle), we use the idea of saturation, allowing us to discard C-triangles that have area close to $0$; see Lemma \ref{Lem4.2}. A $D$-AC $\phi$ is called {\it saturated} if no occupied site can be added to it without breaking admissibility. A {\it saturation} of a given $D$-AC $\phi$ is a completion of $\phi$ (in some uniquely defined way) with the maximal possible number of added occupied sites. Clearly, every {\rm{PGS}} configuration is saturated (this is also true for non-periodic GSs).
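Saturation is easy to emulate on a finite patch of ${\mathbb A}_2$. Below is a minimal sketch (the embedding, helper names and the greedy single-pass strategy are ours, purely for illustration): scanning the patch once and adding every site at distance $\geq D$ from all occupied ones yields a configuration to which no further site of the patch can be added.

```python
import math

def a2_point(i, j):
    """Embed A2 lattice coordinates (i, j) into the plane (unit spacing)."""
    return (i + j / 2, j * math.sqrt(3) / 2)

def saturate(occupied, D, n=8):
    """Greedy toy 'saturation': add A2 sites at Euclidean distance >= D from
    all occupied sites, scanning an n x n patch once.  Since the occupied set
    only grows, a site rejected once stays non-addable, so one pass suffices
    (within the patch).  A small tolerance guards against float round-off."""
    occ = [a2_point(i, j) for i, j in occupied]
    for i in range(n):
        for j in range(n):
            p = a2_point(i, j)
            if all(math.hypot(p[0] - q[0], p[1] - q[1]) >= D - 1e-9
                   for q in occ):
                occ.append(p)
    return occ
```

Starting from the empty configuration with $D=2$, the resulting set is admissible and saturated within the patch: every remaining site violates the hard-core distance.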
Saturated configurations are convenient as a natural initial step in the procedure of identifying PGSs within the set $\mathscr A (D)$ of admissible configurations. The use of saturated configurations also makes more transparent the derivation of the Peierls bound in Section \ref{SubSec5.1}. The idea of a saturated configuration worked well in the study of densely packed circle configurations in ${\mathbb R}^2$; cf. \cite{ChW}. We attempt to emulate a similar approach on ${\mathbb A}_2/\bbH_2$. It generates some technical complications, which are addressed in Lemmas \ref{Lem4.2}--\ref{Lem4.10}.
\begin{lemma}\label{Lem4.2} A saturated configuration on ${\mathbb A}_2/\bbH_2$ does not have {\rm V}-circles of radius $\geq D+1$.
\end{lemma}
\begin{proof} Suppose there exists a V-circle of radius $\geq D+1$. The center of the V-circle may not lie in ${\mathbb A}_2/\bbH_2$ but is at distance $\leq 1$ from one of the ${\mathbb A}_2/\bbH_2$-sites. Then an additional particle can be placed at this site without breaking admissibility. This contradicts the saturation assumption.
\end{proof}
We would like to note a difference between Lemma \ref{Lem4.2} and Lemma 2 from \cite{ChW}: we have the lower bound $D+1$ whereas in \cite{ChW}, Lemma 2, one has $D$. This creates a particular technical complication arising on ${\mathbb A}_2/\bbH_2$ compared with ${\mathbb R}^2$. Lemma \ref{Lem4.2} enables us to discard C-triangles with circumradius $> D+1$ and focus on those with circumradius $\leq D+1$ in our analysis of PGSs. The remaining obtuse C-triangles are tackled via a routine of area re-distribution. More precisely, C-triangles with circumradius $\leq D-1$ are tackled in Lemmas \ref{Lem4.4}, 4.8 and 4.10, depending upon the class of the value $D^2$.
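The V-circle of a C-triangle is just its circumcircle, and the defining property of a Delaunay triangle is that no other occupied site falls strictly inside it. A pure-Python sketch of both checks (the function names are ours, not the paper's):

```python
import math

def circumcircle(A, B, C):
    """Circumcenter and circumradius of triangle ABC (standard formula)."""
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def is_delaunay(tri, points, eps=1e-9):
    """Empty-circumcircle test: no point of the configuration other than the
    triangle's own vertices lies strictly inside its V-circle."""
    (ux, uy), r = circumcircle(*tri)
    return all(math.hypot(px - ux, py - uy) >= r - eps
               for (px, py) in points if (px, py) not in tri)
```

For an equilateral triangle of side $1$ the routine returns the circumradius $1/\sqrt 3$, and a site placed near the circumcenter destroys the Delaunay property, exactly as in the V-circle picture above.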
In Lemmas 4.5.1--4.5.3 we treat C-triangles on ${\mathbb A}_2/\bbH_2$ with circumradius between $D-1$ and $D+1$. Such a C-triangle, let us denote it by $\triangle$, can have area $<S(D)/2$ (when $\triangle$ is obtuse). However, it turns out that in this case there will be an adjacent {\rm C}-triangle $\triangle'$ (sharing a side with $\triangle$) with a rather large area, so that the area of the union $\triangle\cup\triangle'$ is $\geq S(D)+1$. It may also happen that two or three {\rm C}-triangles $\triangle_j$, of area $<S(D)/2$ each, share a common adjacent triangle $\triangle'$; in this case there will again be a lower bound on the area of their union. This observation allows us to control obtuse C-triangles via Lemmas 4.5.1--4.5.3. For formal definitions, see Section \ref{SubSec4.2}.
\subsection{Re-distributed areas for triangles}\label{SubSec4.2}
In this section we introduce two re-distributed areas assigned to a {\rm C}-triangle $\triangle$, which can be conveniently bounded from below. One, $s^{\rm RD}(\triangle )$, characterizes the triangle {\it per se}; the other, $\varSigma (\triangle ,\phi)$, considers it within a $D$-AC $\phi$. The bounds involve the quantities $S(D)$ and $S^{\rm RD} (D)$ determined in \eqref{SoD} and \eqref{SRD(D)}, respectively. This will enable us to analyze the PGSs for the whole array of situations on ${\mathbb A}_2/\bbH_2$, including the exceptional non-sliding values $D^2=13, 16, 28, 49, 64, 67, 97, 157, 256$ (Classes HD and HE). An ${\mathbb A}_2/\bbH_2$-triangle $ABC$ is called a {\it qualifying triangle} if all its side-lengths are $\geq D$ while its circumradius is $\leq D+1$. All triangles we consider from now on are supposed to be qualifying.
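The qualifying condition is a pair of elementary numeric checks. A short sketch (our own helpers, using the textbook identity $R=abc/(4\,\mathrm{area})$ with Heron's formula):

```python
import math

def circumradius(A, B, C):
    """R = abc / (4 * area), with the area from Heron's formula."""
    a = math.dist(B, C); b = math.dist(A, C); c = math.dist(A, B)
    s = (a + b + c) / 2
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    return a * b * c / (4 * area)

def is_qualifying(A, B, C, D):
    """Qualifying triangle: all side lengths >= D and circumradius <= D + 1."""
    sides = (math.dist(A, B), math.dist(B, C), math.dist(A, C))
    return min(sides) >= D and circumradius(A, B, C) <= D + 1
```

An equilateral triangle of side $D$ qualifies (its circumradius is $D/\sqrt 3\leq D+1$), whereas a `sliver' obtuse triangle with long sides but a huge circumradius is rejected, which is precisely the exclusion discussed above.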
A collection of two triangles $ABC$ and $ACE$ with mutually disjoint interiors is called a 2-{\it triangle group} if all sides and diagonals of quadrilateral $ABCE$ are not shorter than $D$, vertex $E$ does not lie inside the circumcircle of $ABC$, and vertex $B$ does not lie inside the circumcircle of $ACE$. A collection of three triangles $ABC$, $CDE$ and $ACE$ with mutually disjoint interiors is called a 3-{\it triangle group} if all sides and diagonals of pentagon $ABCDE$ are not shorter than $D$, vertices $D, E$ do not lie inside the circumcircle of triangle $ABC$, vertices $A, B$ do not lie inside the circumcircle of $CDE$, and vertices $B, D$ do not lie inside the circumcircle of $ACE$. A collection of four triangles $ABC$, $CDE$, $EFA$ and $ACE$ with mutually disjoint interiors is called a 4-{\it triangle group} if all sides and diagonals of hexagon $ABCDEF$ are not shorter than $D$, vertices $D, E, F$ do not lie inside the circumcircle of $ABC$, vertices $F, A, B$ do not lie inside the circumcircle of $CDE$, vertices $B, C, D$ do not lie inside the circumcircle of $EFA$, and vertices $B, D, F$ do not lie inside the circumcircle of triangle $ACE$. For each triangle group one can calculate the corresponding average triangle area which we call the {\it re-distributed group area}. For any triangle $ABC$ one can consider all triangle groups containing this triangle such that side $AB$ is shared with another triangle in the group but sides $BC$ and $CA$ are not shared. The minimal redistributed group area among all such groups is called the {\it $AB$-re-distributed area} of $ABC$ and denoted by $s^{{\rR\rD}}_{AB}(ABC)$. The $BC$-re-distributed area $s^{\rR\rD}_{BC}(ABC)$ and $CA$-re-distributed area $s^{{\rR\rD}}_{CA}(ABC)$ are defined in a similar way. 
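The circumcircle conditions in these definitions can be checked with the standard in-circle determinant predicate from computational geometry. A hedged sketch for the 2-triangle group test (our own illustration, with hypothetical helper names):

```python
import math

def in_circumcircle(A, B, C, P):
    # True iff P lies strictly inside the circumcircle of triangle ABC;
    # standard 3x3 in-circle determinant, requires ABC in counter-clockwise order
    ax, ay = A[0] - P[0], A[1] - P[1]
    bx, by = B[0] - P[0], B[1] - P[1]
    cx, cy = C[0] - P[0], C[1] - P[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
         - (bx * bx + by * by) * (ax * cy - cx * ay)
         + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0

def ccw(A, B, C):
    return (B[0] - A[0]) * (C[1] - A[1]) - (C[0] - A[0]) * (B[1] - A[1]) > 0

def orient(A, B, C):
    # return the vertices in counter-clockwise order
    return (A, B, C) if ccw(A, B, C) else (A, C, B)

def is_2_triangle_group(A, B, C, E, D):
    # triangles ABC and ACE share side AC; all sides and diagonals of
    # quadrilateral ABCE must be >= D, E must not lie inside the
    # circumcircle of ABC, and B not inside the circumcircle of ACE
    pairs = [(A, B), (B, C), (C, E), (E, A), (A, C), (B, E)]
    if any(math.dist(p, q) < D for p, q in pairs):
        return False
    return (not in_circumcircle(*orient(A, B, C), E)
            and not in_circumcircle(*orient(A, C, E), B))
```

For instance, a $D$-triangle together with its reflection about a side (a rhombus with side $D$) forms a 2-triangle group.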
The quantity \begin{equation}\label{RDarea-s}s^{\rR\rD}(ABC) = \max\big(s(ABC),s^{{\rR\rD}}_{AB}(ABC),s^{{\rR\rD}}_{BC}(ABC), s^{{\rR\rD}}_{CA}(ABC)\big)\end{equation} is called the {\it re-distributed area of triangle} $ABC$. Here and below $s(ABC)$ stands for the area of $ABC$; the notation $s(\triangle)$, $s(\triangle\cup\triangle')$ and so on will have a similar meaning. If the maximum in \eqref{RDarea-s} is achieved at $s^{ {\rR\rD}}_\bullet (ABC)$ then the corresponding triangle side is called a {\it re-distributing side} (of $ABC$) and denoted by $\sigma (ABC)$. \FigureN14 The doubled minimal redistributed area is denoted by $S^{\rR\rD} (D)=S^{\rR\rD} (D,{\mathbb A}_2/\bbH_2)$: \begin{equation}\label{SRD(D)}\begin{array}{r} S^{\rR\rD} (D)=2\times\min\,\Big[s^{\rR\rD}(\triangle ):\;\triangle\;\hbox{ runs over the triangles on ${\mathbb A}_2/\bbH_2$}\Big].\end{array}\end{equation} A triangle $\triangle$ with minimal re-distributed area, i.e., with $s^{\rR\rD} (\triangle )=S^{\rR\rD} (D)/2$, is called an {\it MRA-triangle}. Note that if $\triangle$ has an area $s(\triangle )< s^{\rR\rD} (\triangle )$ then $\triangle$ has a re-distributing side. An important observation is that, by virtue of Lemma \ref{Lem4.1} and the definition of an MRA-triangle, any PGS consists of MRA-triangles. Cf. \cite{HS}, the criterion on p.~179. More precisely, we say that a $D$-AC is MRA-perfect if its C-triangles are all MRA-triangles. \begin{lemma} \label{Lem4.3} Given an attainable $D^2$, suppose that there exists a perfect $D$-{\rm{AC}} $\phi$. Then any $D$-{\rm{PGS}} is a perfect configuration. \end{lemma} \begin{proof} Owing to Lemma \ref{Lem4.1}, the particle density in $\phi$ is $1/S^{\rR\rD} (D)$. Then any periodic $D$-AC has particle density $\leq 1/S^{\rR\rD} (D)$. Next, let $\psi$ be any periodic $D$-AC containing a non-MRA-triangle.
Then, in a large basic quadrilateral polygon ${\mathbb V}$, one can construct a perturbation of $\psi$ having more particles in ${\mathbb V}$ than $\psi$ has. In fact, such a perturbation will have the same pattern as $\phi$ in ${\mathbb V}$. We will have to remove some particles from $\psi$ along the boundary $\partial{\mathbb V}$ but will gain a number of particles proportional to the number of sites in ${\mathbb V}$. Consequently, any PGS should consist of MRA-triangles. \end{proof} \begin{rb} {\rm In the course of this section, we will check that for any attainable $D^2$ on both ${\mathbb A}_2$ and $\bbH_2$ there exists at least one perfect $D$-AC. Moreover, as we show further in this section, for any non-sliding $D$ the number of perfect configurations (and hence that of the PGSs) is finite. Furthermore, it will be shown that every perfect configuration is periodic, hence a PGS. Consequently, any non-periodic ground state contains at least one infinite connected component of non-MRA triangles and no finite ones. Moreover, the number of non-MRA triangles in an ${\mathbb A}_2/\bbH_2$-hexagon ${\mathbb V} (L)$ of side-length $L$ can only grow linearly with $L$; this means that non-MRA triangles form, effectively, a one-dimensional array.
Let us repeat once more that, according to \cite{DoS}, non-periodic ground states do not generate EGMs on ${\mathbb A}_2/\bbH_2$.} $\blacktriangle$ \end{rb} \begin{lemma}\label{Lem4.4} For any $D^2$ on ${\mathbb A}_2$ and for any $D^2$ divisible by $3$ on $\bbH_2$: \begin{description} \item[{\rm{(i)}}] for a $D$-triangle $\triangle^\circ$, we have $s^{\rR\rD} (\triangle^\circ )={\sqrt 3}D^2/4=S(D)/2$ (cf. Eqn \eqref{SoD}), \item[{\rm{(ii)}}] $\forall$ $D$-admissible ${\mathbb A}_2/\bbH_2$-triangle $\triangle$ non-congruent to $\triangle^\circ$ such that the circumradius of $\triangle$ is $\leq D-1$, we have \begin{equation}\label{sRDtrianlgebound} \qquad\qquad\qquad\qquad\qquad s^{\rR\rD} (\triangle)\geq s^{\rR\rD} (\triangle^\circ )+\frac{\sqrt 3}{8}.\end{equation} \end{description} \end{lemma} \begin{proof} (i) A $D$-triangle $ABC$ can be complemented by its reflection about a given side, say $AB$, to form a 2-triangle group. Hence, $s^{\rR\rD}_{AB}(ABC)\leq S(D)/2$. By definition \eqref{RDarea-s}, it follows that $s^{\rR\rD} (ABC)= S(D)/2$. (ii) The triangles under consideration in assertion (ii) have the maximum angle strictly between $\pi/3$ and $2\pi/3$. The sine of such an angle is $>{\sqrt 3}/2$. Hence, $s (\triangle )> \displaystyle\frac{D^2{\sqrt 3}}{4}=\frac{S(D)}{2}$. This implies \eqref{sRDtrianlgebound}, since the area of an ${\mathbb A}_2$-triangle multiplied by $8/{\sqrt 3}$ is an integer. \end{proof} Consider an arbitrary saturated $D$-AC $\phi$ on ${\mathbb A}_2/\bbH_2$ and identify all triangles in $\phi$ with an area $<S^{\rR\rD} (D)/2$. For each such triangle $\triangle$ consider a 2-triangle group $\triangle\cup\triangle'$ formed by a triangle $\triangle'$, called a {\it donor}, adjacent to $\triangle$ along the redistributing side $\sigma (\triangle )$.
If several such 2-triangle groups have a common donor $\triangle'$ then we unite them into a single 3-triangle group or 4-triangle group. (By construction, a donor $\triangle'$ has area $>S^{\rR\rD} (D)/2$.) In the case of a 3-triangle group we have a donor $\triangle'$ with area $\geq S^{\rR\rD} (D)/2$ grouped with two adjacent triangles of area $<S^{\rR\rD} (D)/2$. In the case of a 4-triangle group we have a donor $\triangle'$ of area $s (\triangle')\geq S^{\rR\rD} (D)/2$ grouped with three adjacent triangles of area $<S^{\rR\rD} (D)/2$. By construction, each triangle $\triangle$ in $\phi$ belongs to at most one group. Furthermore, the grouping uniquely assigns the {\it redistributed group area} $\varSigma (\triangle ,\phi)$ to each triangle $\triangle$ in the AC $\phi$. Namely, $\varSigma (\triangle ,\phi)$ is the total area of the triangles in the group containing $\triangle$ divided by the number of triangles in that group. Next, if $\triangle$ is not a donor then $\varSigma (\triangle ,\phi)\geq s^{\rR\rD} (\triangle )\geq S^{\rR\rD} (D)/2$. If $\triangle$ is a donor we have that $s^{\rR\rD} (\triangle )\geq s(\triangle ) >\varSigma (\triangle ,\phi)\geq S^{\rR\rD} (D)/2$; the last inequality holds since $\varSigma (\,\bullet\, ,\phi)$ is the same for all members of the group. Finally, if $\triangle$ does not belong to any group in $\phi$ then $s^{\rR\rD} (\triangle )\geq s (\triangle ) =\varSigma (\triangle ,\phi) \geq S^{\rR\rD} (D)/2$; the equality $s (\triangle ) =\varSigma (\triangle ,\phi)$ and the inequality $\varSigma (\triangle ,\phi) \geq S^{\rR\rD} (D)/2$ follow directly from the way in which $\triangle$ is identified in $\phi$. \subsection{Equality $S^{\rR\rD} (D)=S(D)$ on ${\mathbb A}_2$ and -- for $3|D^2$ -- on $\bbH_2$} \label{SubSec4.3} In this section we give three lemmas, 4.5.1--4.5.3, treating C-triangles on ${\mathbb A}_2/\bbH_2$ with a circumradius $r=D+\delta$, $\delta\in [-1,1]$.
Then we proceed with Lemma 4.6 which, together with Lemma \ref{Lem4.3}, establishes the equality $S^{\rR\rD} (D)=S(D)$ under some conditions on $D^2$. \begin{lema} \label{Lem4.5.1} Suppose that a {\rm C}-triangle $\triangle$ has the circumradius $r=D+\delta$ where $-1\leq\delta\leq 1$. Then \begin{equation}\label{eq:Tsep'} \qquad\qquad\qquad s(\triangle )\geq \displaystyle\frac{D^3}{2r}\sqrt{1-\frac{D^2}{4r^2}} >\displaystyle\frac{{\sqrt 3}D^2}{4}-\frac{D\delta}{2\sqrt 3}.\end{equation} Here $\displaystyle\frac{D^3}{2r}\sqrt{1-\frac{D^2}{4r^2}}$ is the area of an isosceles triangle with circumradius $r$ and two side-lengths $D$. The longest side in this triangle has length $\displaystyle <D{\sqrt 3}+\frac{\delta}{\sqrt 3}$. \end{lema} \begin{proof} Suppose a {\rm C}-triangle $\triangle$ with vertices $A, B, C$ satisfies the assumptions of the lemma. Let the side-lengths be $AB=l_0$, $BC=l_1$, $CA=l_2$, with $D\leq l_0\leq l_1\leq l_2\leq 2r$. If two side-lengths are $>D$, say $l_1 ,l_2>D$, then the area of $\triangle$ can be made smaller by moving vertex $C$ along the circumcircle towards $B$, until the length of side $BC$ becomes $D$. Indeed, in the process of motion $l_0$ remains fixed but the height from $C$ to $AB$ shortens. Thus, the area of $\triangle$ is lower-bounded by the area of an isosceles triangle with two side-lengths $D$ and the remaining side-length $\displaystyle 2D\sqrt{1-\frac{D^2}{4r^2}}$. (On $\bbH_2$, it is not necessarily an $\bbH_2$-triangle.) A direct calculation shows that for $D\geq 1$ and $\delta\in (-1,1)$ the bound $\displaystyle 2D\sqrt{1-\frac{D^2}{4r^2}} < D{\sqrt 3}+\frac{\delta}{\sqrt 3}$ holds true. (The right-hand side is simply the Taylor expansion in $\delta$ up to order $1$.) The area of such a triangle equals $\displaystyle\frac{D^3}{2r}\sqrt{1-\frac{D^2}{4r^2}}$.
Finally, $\displaystyle\frac{D^3}{2r}\sqrt{1-\frac{D^2}{4r^2}}= \frac{D^3}{2(D+\delta )}\sqrt{1-\frac{D^2}{4(D+\delta)^2}} >\frac{{\sqrt 3}D^2}{4}-\frac{D\delta}{2\sqrt 3}$. \end{proof} \begin{lemb}\label{Lem4.5.2} Suppose that a {\rm C}-triangle $\triangle$ with side-lengths $l_0, l_1, l_2$ has the circumradius $r=D+\delta$ where $-1\leq\delta\leq 1$. Consider an adjacent {\rm C}-triangle $\triangle'$ that shares with $\triangle$ the longest side (of length $l_2$). Then the area $s(\triangle\cup\triangle' )$ is lower-bounded by the area of a trapezoid inscribed in a circle of radius $r$, with three sides of length $D$. Furthermore, for $D^2\geq 400$ we have $s(\triangle\cup\triangle')\geq \displaystyle\frac{3{\sqrt 3}D^2}{4}-2\delta^2$. \end{lemb} \begin{proof} Again, we assume $D\leq l_0\leq l_1\leq l_2\leq 2r$. Two vertices of triangle $\triangle'$ are the end-points of the side of length $l_2$ and lie on the V-circle of radius $r$ circumscribing $\triangle$. The third vertex of $\triangle'$ cannot lie inside this V-circle but can be placed on the circle. It should also lie outside the circles of radius $D$ centered at the end-points of the side of length $l_2$. Under these restrictions, the minimal area of $\triangle'$ is not less than the area of a triangle inscribed in the V-circle which shares the side of length $l_2$ with $\triangle$ and has another side of length $D$. (Cf. the proof of Lemma 4.5.1.) If we now minimize the area of $\triangle$, we obtain a pair $\triangle$, $\triangle'$ forming a trapezoid, as specified in the assertion of Lemma 4.5.2. (Again, on $\bbH_2$ the resulting triangle is not necessarily an $\bbH_2$-triangle.) The area of the trapezoid in question equals the sum of the areas of 4 triangles, 3 of which are identical. The area of each of these identical triangles is $\displaystyle\frac{r^2}{2}\sin (2\alpha )$ where $\sin (\alpha )=\displaystyle\frac{D}{2r}$.
The area of the fourth triangle is $\displaystyle\frac{r^2}{2}\sin (2\pi -6\alpha)$. All in all, the area of the trapezoid is $\displaystyle\frac{r^2}{2}\,4\sin^3(2\alpha )$, which equals $$\frac{2D^3}{r}\left(\sqrt{1-\frac{D^2}{4r^2}}\right)^3=\frac{3{\sqrt 3}D^2}{4}-{\sqrt 3}\delta^2 +\frac{19\delta^3}{3{\sqrt 3}D}-\frac{113\delta^4}{9{\sqrt 3}D^2}+\ldots $$ A straightforward calculation shows that for $D^2\geq 400$ and $-1\leq\delta\leq 1$ this expression is $\displaystyle\geq\frac{3{\sqrt 3}D^2}{4}-2\delta^2$, as claimed in the lemma. \end{proof} \begin{lemc} \label{Lem4.5.3} Suppose that a ${\rm C}$-triangle $\triangle$ has the circumradius $r=D+\delta$ where $-1\leq\delta\leq 1$. Let $\triangle'$ be the adjacent {\rm C}-triangle sharing the longest side with $\triangle$ (cf. Lemma {\rm{4.5.2}}). \begin{description} \item[{\rm{(i)}}] Suppose that $\triangle'$ is adjacent to another {\rm C}-triangle, $\triangle_1$, with circumradius $r_1=D+\delta_1$ where $-1\leq\delta_1\leq 1$. Then we have $s(\triangle') \geq 3D^2/4$. \item[{\rm{(ii)}}] Further, suppose $\triangle'$ is adjacent to two other {\rm C}-triangles, $\triangle_1$ and $\triangle_2$, with circumradii $r_1=D+\delta_1$ and $r_2=D+\delta_2$ where $-1\leq\delta_1,\delta_2\leq 1$. Then $s(\triangle' )\geq D^2$. \end{description} \end{lemc} \begin{proof} (i) Here the triangle $\triangle'$ has one side-length $\geq D$ and two others $\geq D\sqrt 3$ by construction. On the other hand, the side-lengths are $\leq 2D+2$ since the circumradius is $\leq D+1$. Therefore, the area of $\triangle'$ is greater than or equal to the area of a triangle with side-lengths $D$, $D{\sqrt 3}$, $D{\sqrt 3}$. The area of such a triangle is, clearly, $\geq 3D^2/4$. (ii) In this case all side-lengths of $\triangle'$ are $\geq D\sqrt 3$. Hence, the area of $\triangle'$ is $\geq D^2$.
\end{proof} \begin{lemd}\label{Lem4.6} For any $D^2$ on ${\mathbb A}_2$ and for any $D^2$ divisible by $3$ on $\bbH_2$, we have that \begin{equation}\label{SRD=S} \qquad\qquad\qquad\qquad S^{\rR\rD} (D)=S(D) ,\end{equation} and the equality $S^{\rR\rD} (D)=2s^{\rR\rD} (\triangle )$ is attained only when $\triangle$ is a $D$-triangle. For each of these values of $D$, the corresponding {\rm{MRA}}-perfect configuration exists and has type $(D,\alpha )$. \end{lemd} \begin{proof} In the situation of Lemma 4.4 we have the bound \begin{equation}\label{sRDbound}2s^{\rR\rD} (\triangle)\geq S(D)+\frac{\sqrt 3}{4}.\end{equation} Next, in the situation of Lemma {\rm{4.5.2}} (in particular, for $D^2\geq 400$) we have: \begin{equation}\label{MinAreaS(D)1}2s^{\rR\rD} (\triangle )\geq \frac{3}{2}S(D)-2\delta^2\geq S(D)+\frac{\sqrt 3}{2}.\end{equation} Next, in case {\rm{(i)}} of Lemma {\rm{4.5.3}}, \begin{equation}\label{MinAreaS(D)2}\displaystyle 3s^{\rR\rD} (\triangle )\geq 2\left (2S(D) -\frac{D\delta}{2\sqrt 3}\right)+\frac{\sqrt 3}{2}S(D)\geq \frac{3}{2}\left(S(D)+\frac{\sqrt 3}{2}\right).\end{equation} Finally, in case {\rm{(ii)}} of Lemma {\rm{4.5.3}}, \begin{equation}\label{MinAreaS(D)3}\displaystyle 4s^{\rR\rD} (\triangle )\geq 3\left(2S(D)-\frac{D\delta}{2\sqrt 3}\right) +\frac{2}{\sqrt 3}S(D)\geq 2\left(S(D)+\frac{\sqrt 3}{2}\right).\end{equation} Together, \eqref{sRDbound}, \eqref{MinAreaS(D)1}, \eqref{MinAreaS(D)2}, \eqref{MinAreaS(D)3} imply that for $D^2\geq 400$ and every C-triangle $\triangle$ different from a $D$-triangle: $2s^{\rR\rD} (\triangle )>S(D)$. This implies the assertion of Lemma 4.6 for $D^2\geq 400$. For $1\leq D^2< 400$ the proof is done by a computer enumeration. \end{proof} \subsection{MRA-triangles for Class HC on $\bbH_2$}\label{SubSec4.4} Next, we analyze the situation on $\bbH_2$ for values $D^2$ not divisible by $3$.
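The analysis for Class HC compares $D^2$ with nearby L\"oschian numbers divisible by $3$; cf. Lemma 4.7 below and Program 1 {\tt NearestLoschianNumber} in the ancillary file. A minimal sketch of such a numerical probe -- our own illustration, not the ancillary program:

```python
import math

def loschian_upto(n):
    # Loschian numbers x^2 + x*y + y^2 (norms on the triangular lattice), up to n
    out = set()
    m = math.isqrt(n)
    for x in range(m + 1):
        for y in range(x, m + 1):   # v >= y^2, so y <= isqrt(n) suffices
            v = x * x + x * y + y * y
            if v <= n:
                out.add(v)
    return sorted(out)

def nearest_div3(d2, loschian):
    # smallest Loschian number >= d2 that is divisible by 3;
    # its square root plays the role of D* in the text
    return next(v for v in loschian if v >= d2 and v % 3 == 0)
```

For instance, one can verify that for every L\"oschian $D^2$ with $300\leq D^2\leq 10^4$ the distance from $D^2$ to the value returned by {\tt nearest\_div3} is at most $18\sqrt D$, in line with Lemma 4.7.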
\begin{leme} \label{Lem4.7} For any L\"oschian $D^2\geq 300$ there exists a L\"oschian number that is $\geq D^2$, is divisible by $3$ and is at distance at most $18{\sqrt D}$ from $D^2$. \end{leme} \begin{proof} Consider L\"oschian numbers divisible by 3 of the form $$(l-3k)^2 + (l+3k)^2 + (l-3k)(l+3k) = 3l^2+9k^2.$$ (This is simply the set of all L\"oschian numbers scaled by a factor of $3$.) Now, take an arbitrary L\"oschian number $D^2$ and find $l$ such that $3l^2 \le D^2 \le 3(l+1)^2$. Then find $k$ such that $$3l^2 + 9k^2 \le D^2 \le \min\big(3(l+1)^2,\; 3l^2 + 9(k+1)^2\big).$$ Then $9k^2 \le 6l+3$, i.e. $k \le \sqrt{(2l+1)/3}$. The distance from $D^2$ to $\min\big(3(l+1)^2,\; 3l^2 + 9(k+1)^2\big)$ is at most \begin{equation}\begin{array}{l} 9(k+1)^2 -9k^2 = 18k+9 \\ \qquad\qquad < 18 \sqrt{(2l+1)/ 3} + 9 < 18 \sqrt{l} \le 18 \sqrt{\sqrt{D^2 /3}} \le 18{\sqrt D},\end{array}\end{equation} where the middle inequality involving $l$ holds for $l > 9$. \end{proof} In what follows, we denote by $D^*(=D^*(D))$ the smallest value $\geq D$ such that $(D^*)^2$ is a L\"oschian number divisible by $3$. \begin{lemf}\label{Lem4.8} Any non-equilateral $D$-admissible $\bbH_2$-triangle $\triangle$ with circumradius $\leq D-1$ and the shortest side-length $<D^*$ has at least one side with squared length $\geq D^2 + D+1$. Consequently, for the double area $2s(\triangle )$ we have: \begin{equation}\label{eq:L4.8_1} \begin{array}{l} 2s(\triangle )\geq h(D)\;\hbox{ where}\\ \displaystyle\qquad h(D)=\min\,\bigg[\frac{D^3}{D-1} \sqrt{1-\frac{D^2}{4(D-1)^2}},\;\frac{1}{2}\sqrt{(3D^2-D-1)(D^2+D+1)}\bigg].
\end{array}\end{equation} Furthermore, for $D^2\geq 12$: \begin{equation}\label{eq:L4.8_2} \displaystyle \qquad\qquad h(D)>\frac{\sqrt 3}{2}D^2 + \frac{D}{2{\sqrt 3}}.\end{equation} \end{lemf} \FigureO15 \begin{proof} Referring to Figure 15, suppose that triangle $\triangle$ is $OCB''$, and its shortest side is $OC$, with $D\leq |OC|<D^*$. Hence, $|OC|^2$ is not divisible by 3. Let $B'$ be the vertex of an equilateral triangle, with $|OB'|=|CB'|=|OC|$. Then $B'$ will be at the center of a unit hexagon, as shown in Figure 15. Consequently, $|B'B''| \ge 1$, and at least one of the triangles $OB'B''$ or $CB'B''$ is obtuse, with the corresponding obtuse angle $\geq 2\pi/ 3$. Therefore, by the law of cosines, the squared length of the longest side of the obtuse triangle is at least $D^2 + D + 1$. Hence, as long as triangle $OCB''$ is acute, we have that $2s(OCB'')\geq\displaystyle\frac{1}{2} \sqrt{(3D^2-D-1)(D^2+D+1)}$. The latter value is the doubled area of an isosceles triangle with side-lengths $D$, $D$ and $\sqrt{D^2+D+1}$. On the other hand, if $OCB''$ is obtuse then, according to Lemma 4.5.1, $s(OCB'' )\geq \displaystyle\frac{D^3}{2(D-1)}\sqrt{1-\frac{D^2}{4(D-1)^2}}$. The last assertion of the lemma is straightforward for $D^2\geq 12$. \end{proof} \begin{lemg} \label{Lem4.9} For any value $D^2$ of Class {\rm HC} on $\bbH_2$ we have that \begin{equation}\label{(4.11)} \qquad\qquad\qquad S^{\rR\rD} (D)=S(D^*),\end{equation} and the equality $S^{\rR\rD} (D)=2s^{\rR\rD} (\triangle )$ is attained only when $\triangle$ is congruent to a $D^*$-triangle $\triangle^*$. Moreover, for each value of $D$ of Class {\rm HC}, the corresponding {\rm{MRA}}-perfect configuration exists and has type $\alpha (D^*)$. \end{lemg} \begin{proof} By construction, $s^{\rR\rD} (\triangle )\geq s(\triangle )$ for any $D$-admissible ${\mathbb A}_2/\bbH_2$-triangle.
First, we consider triangles satisfying the conditions of Lemma 4.8. For any such $\triangle$ we have \begin{equation}\label{(4.12)}2s^{\rR\rD} (\triangle )\geq 2s (\triangle )\geq h(D).\end{equation} If $D^2\geq (54)^4$ then, by Lemma 4.7, for any such $\triangle$, \begin{equation} h(D)\geq \frac{\sqrt 3}{2}\big(D^2 + 18{\sqrt D}\big)\geq\frac{{\sqrt 3}(D^*)^2}{2}=2s(\triangle^*),\end{equation} and therefore $s^{\rR\rD} (\triangle )>s(\triangle^*)$. Hence, such a triangle cannot be an MRA-triangle if $D^2\geq (54)^4$. When $D^2< (54)^4$ then, instead of utilizing Lemma 4.7 we verify numerically that, apart from 184 values, every $D^2< (54)^4$ satisfies the bound $\big(D^2 + D/3\big) >(D^*)^2$, which again implies that $s^{\rR\rD} (\triangle )> s(\triangle^*)$, with the help of \eqref{eq:L4.8_1}. Cf. Section \ref{Sec9} and Program 1 {\tt NearestLoschianNumber} in the ancillary file. The non-exceptional $D$ among 184 remaining values are tackled by a separate computer program which calculates $S^{\rR\rD} (D)$ and verifies \eqref{(4.11)}. Cf. Section \ref{Sec9} and Program 2 {\tt SpecialD} in the ancillary file. Next, we discuss the case where we have a non-equilateral triangle $\triangle$ with circumradius $\leq D-1$ and the shortest side-length $\geq D^*$. Here, the inequality $2s(\triangle ) \geq S^{\rR\rD} (D^*)$ is straightforward if $\triangle$ is acute and follows from Lemma 4.5.1 if $\triangle$ is obtuse. Finally, a $D$-admissible triangle with circumradius between $D-1$ and $D+1$ cannot be an MRA-triangle by virtue of an argument similar to the one in the proof of Lemma 4.6. A lower bound $s^{\rR\rD} (\triangle )> s(\triangle^*)$ for such a triangle $\triangle$ is obtained by repeating the proof of Lemma 4.6 where we use analogs of inequalities \eqref{MinAreaS(D)1}, \eqref{MinAreaS(D)2} and \eqref{MinAreaS(D)3} with the value $S(D)$ in the RHS replaced by $S(D^*)$. 
Namely, in the situation of Lemma {\rm{4.5.2}}, \begin{equation}\label{(4.15)} 2s^{\rR\rD} (\triangle )\geq \frac{3}{2}S(D)-2\delta^2\geq S(D^*)+\frac{\sqrt 3}{2},\end{equation} in case {\rm{(i)}} of Lemma {\rm{4.5.3}}, \begin{equation}\label{(4.16)} 3s^{\rR\rD} (\triangle )\geq 2\left (2S(D) -\frac{D\delta}{2\sqrt 3}\right)+\frac{\sqrt 3}{2}S(D)\geq \frac{3}{2}\left(S(D^*)+\frac{\sqrt 3}{2}\right),\end{equation} and in case {\rm{(ii)}} of Lemma {\rm{4.5.3}}, \begin{equation}\label{(4.17)} 4s^{\rR\rD} (\triangle )\geq 3\left(2S(D)-\frac{D\delta}{2\sqrt 3}\right) +\frac{2}{\sqrt 3}S(D)\geq 2\left(S(D^*)+\frac{\sqrt 3}{2}\right).\end{equation} For $D^2\geq (54)^4$, bounds \eqref{(4.15)}-\eqref{(4.17)} are a consequence of Lemma 4.7. For non-exceptional $D^2< (54)^4$, \eqref{(4.15)} and \eqref{(4.16)} are verified numerically -- cf. Section \ref{Sec9} and Program 1 {\tt NearestLoschianNumber} in the ancillary file -- while the second bound in \eqref{(4.17)} follows from the second bound in \eqref{(4.16)}. This implies the desired estimate $s^{\rR\rD} (\triangle )> s(\triangle^*)$ for a $D$-admissible triangle $\triangle$ with circumradius between $D-1$ and $D+1$. Thus, it is established that for any $\triangle$ with circumradius $\leq D+1$ not congruent to $\triangle^*$ we have the bound \begin{equation}\label{sRD>sRD*} \qquad\qquad\qquad s^{\rR\rD} (\triangle )>s^{\rR\rD} (\triangle^*).\end{equation} This leads to the assertions of Lemma 4.9. \end{proof} \subsection{MRA-triangles for Classes HD and HE on $\bbH_2$}\label{SubSec4.5} In this section we establish the values $S^{\rR\rD} (D)$ when $D$ is exceptional and non-sliding. We use the notation $[l^2_0|l^2_1|l^2_2]$, referred to as a triangle type, to indicate a triangle with side-lengths $l_0\leq l_1\leq l_2$. For example, in Figure 7, triangles $AOB$, $AOH$, $HFO$, $CED$ have type $[13|19|21]$ whereas triangles $OBC$, $OCE$, $OFE$ have type $[13|16|21]$.
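A triangle type determines the area directly, via Heron's formula written in squared side-lengths: for squared sides $p=l_0^2$, $q=l_1^2$, $r=l_2^2$ one has $16\,s^2=2(pq+qr+rp)-(p^2+q^2+r^2)$. A small sketch of this computation (our own check, not Program 2 from the ancillary file):

```python
import math

def area_from_type(sq):
    # Heron's formula in terms of the squared side-lengths [l0^2 | l1^2 | l2^2]
    p, q, r = sq
    s16 = 2 * (p * q + q * r + r * p) - (p ** 2 + q ** 2 + r ** 2)  # (4s)^2
    return math.sqrt(s16) / 4

def doubled_group_area(types):
    # doubled average area over a triangle group, the quantity compared
    # with S^RD(D) in the text
    return 2 * sum(area_from_type(t) for t in types) / len(types)
```

For instance, for $D^2=13$ the group $\{[13|16|21],[13|19|21]\}$ yields the doubled average area $16.5{\sqrt 3}/2$, matching the first row of \eqref{SRDexceptional}.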
\begin{lemh}\label{Lem4.10} The {\rR\rD}-perfect configurations exist for all exceptional $D^2=$ $13$, $16$, $28$, $49$, $64$, $67$, $97$, $157$, $256$ and are periodic. The corresponding values of $S^{\rR\rD} (D)$ and the triangle groups on $\bbH_2$ at which these values are achieved are as follows: \begin{equation}\label{SRDexceptional}\begin{array}{llc} S^{\rR\rD} (\sqrt{13}) = 16.5 {\sqrt 3}/2 &\{[13|16|21],[13|19|21]\} &\beta\\ S^{\rR\rD} (\sqrt{16}) = 20.25 {\sqrt 3}/2 &\{[21|21|21],[16|21|25], &\\ &\;\;[16|21|25],[16|21|25]\}&\gamma\\ S^{\rR\rD} (\sqrt{28}) = 33{\sqrt 3}/2 &\{[28|31|39],[28|37|39]\}&\beta\\ S^{\rR\rD} (\sqrt{49}) =55.5{\sqrt 3}/2 &\{[49|52|63],[49|61|63]\}&\beta\\ S^{\rR\rD} (\sqrt{64}) =72{\sqrt 3}/2 &\{[64|73|81]\}&\beta\\ S^{\rR\rD} (\sqrt{67}) = 75{\sqrt 3}/2 &\{[75|75|75]\}&\alpha (\sqrt{75})\\ S^{\rR\rD} (\sqrt{67}) = 75{\sqrt 3}/2 &\{[67|73|84],[67|79|84]\}&\beta \\ S^{\rR\rD} (\sqrt{97}) = 106.5{\sqrt 3}/2 &\{[97|103|117],[97|112|117]\}&\beta \\ S^{\rR\rD} (\sqrt{157}) = 169.5{\sqrt 3}/2 &\{[157|169|183],[157|172|183]\}&\beta\\ S^{\rR\rD} (\sqrt{256}) = 272.25 {\sqrt 3}/2 &\{[273|273|273],[256|273|289], &\\ &\;\;[256|273|289],[256|273|289]\}&\gamma . \end{array}\end{equation} For each of these values of $D$, the corresponding {\rm{MRA}}-perfect configuration exists, and its type is listed in the right column. \end{lemh} \begin{proof} The calculation of $S^{\rR\rD} (D)$, which involves a finite number of qualifying triangles, is performed by Program 2 {\tt SpecialD}. Cf. Section \ref{Sec9} and the ancillary file. \end{proof} \begin{rc} {\rm Lemma 4.10 indicates that the exceptional values of $D$ emerge when the MRA-triangles are non-unique or non-equilateral. As a result, we have PGSs which are not obtained from max-dense sub-lattices. E.g., for $D^2=67$ we have (i) an MRA-triangle that is a $D$-triangle with $D^2=75$, and (ii) a 2-triangle group formed by non-equilateral MRA-triangles.
Another notable case is $D^2=64$ where an MRA-triangle is unique but not equilateral (and forms a group on its own). Here all occupied sites in the $\beta$-PGSs have V-cells of area $72{\sqrt 3}/2$; these V-cells are congruent hexagons. However, there exist ACs where some V-cells (still hexagons) have area $71.5{\sqrt 3}/2$.} $\blacktriangle$ \end{rc} \subsection{Proof of Theorem I}\label{SubSec4.6} \begin{proofI} Owing to Lemma 4.3, Theorem I follows from Lemmas 4.6, 4.9 and 4.10 establishing the existence of RD-perfect configurations for all non-sliding values of $D$. \end{proofI} \begin{rdd} {\rm A corollary of Theorem I is that the particle density in a PGS (per unit Euclidean area) equals $1/S^{\rR\rD} (D)$.} $\blacktriangle$ \end{rdd} \section{The Peierls bound}\label{Sec5} \subsection{The Peierls bound via MRA-triangles}\label{SubSec5.1} As was said, an application of the PS theory needs a Peierls bound. Here we establish the Peierls bound by using the machinery of MRA-triangles. We again begin with some auxiliary notions and statements. Throughout Section \ref{SubSec5.1} we assume that on ${\mathbb A}_2$, $D$ takes any attainable value while on $\bbH_2$ the value $D$ is non-sliding (i.e., not from Class HS). Let $\phi^*$ be a saturation of a given $D$-AC $\phi$. If an added occupied site $x\in\phi^*\setminus\phi$ lies in a template then, clearly, this template is incorrect (more precisely, non-${\varphi}$-correct in ${\varphi}$ for each ${\varphi}\in\mathscr P$). We say that such a template is an {\it s-defect} (in $\phi$). Another possibility for a defect is where, in the saturation $\phi^*$, a template has a non-empty intersection with one of the C-triangles that is not an MRA-triangle. We call it a {\it t-defect} (again in $\phi$). Finally, an incorrect template can be simply a neighbor of an s- or a t-defect. We call it an {\it n-defect} (still in $\phi$).
Observe that any triangle intersecting the support of the n-defect template is an MRA-triangle. We would like to note that C-triangles considered in Lemmas 4.5.1--4.5.3 lead to t-defects by definition. \begin{lemma} \label{Lem5.1} {\rm{(A Peierls bound in terms of defects)}} Let $D$ be not from Class {\rm{HS}}. Consider a ${\varphi}$-contour ${\Gamma} =\big({\rm{Supp}}\,({\Gamma} ),\phi\upharpoonright_{{\rm{Supp}}\,({\Gamma} )}\big)$ containing $m=\|{\rm{Supp}}\,({\Gamma} )\|$ incorrect templates. Additionally, assume that $m = i + j + k$ where $i, j, k$ give the numbers of {\rm s}-, \rt- and \rn-defects in $\phi$, respectively. Then for the weight $w({\Gamma} )$ we have that \begin{equation} \qquad\qquad\qquad w({\Gamma} )\leq u^{-i-j{\sqrt 3}/(32 S^{\rR\rD} (D))}.\end{equation} \end{lemma} \begin{proof} The count $i$ contributed by the s-defects is straightforward, so we consider the saturation $\phi^*$ and its t-defects only. Observe that, according to Lemmas 4.4, 4.6, 4.9, 4.10, the re-distributed area of any C-triangle that is not an MRA-triangle is at least $\displaystyle\frac{1}{2}\left(S^{\rR\rD} (D)+\frac{\sqrt 3}{2}\del (D)\right)$, where $\del (D) \ge 1/2$. (The overall minimal value $1/2$ for $\del (D)$ is attained, e.g., for $D^2=16$ in Lemma 4.10.) Further, a C-triangle that is not an MRA-triangle can be shared by at most 4 templates. Therefore, $j$ templates with t-defects contain (in the ${\mathbb R}^2$-sense) at least $j/4$ C-triangles that are not MRA-triangles. Consider a torus $\bbT$ formed by an integer number of templates and containing ${\rm{Supp}}\,({\Gamma} )$. Then $\bbT$ contains at most $s(\bbT )/S^{\rR\rD} (D)$ C-triangles where $s(\bbT )$ is the area of $\bbT$.
On the other hand, the maximal possible number of C-triangles in $\phi^*\upharpoonright_{\bbT}$ is $\leq\big(s(\bbT )- j{\sqrt 3}/16\big)\big/S^{\rR\rD} (D)$. Next, owing to Lemma \ref{Lem4.1}, the number of particles in $\phi^*\upharpoonright_{\bbT}$ and ${\varphi}\upharpoonright_{\bbT}$ is obtained by dividing the number of C-triangles by a factor 2. Finally, we can pass from $\bbT$ to ${\rm{Supp}}\,({\Gamma} )$ as the number of particles in $\phi^*\upharpoonright_{\bbT\setminus{\rm{Supp}}\,({\Gamma} )}$ and ${\varphi}\upharpoonright_{\bbT\setminus{\rm{Supp}}\,({\Gamma} )}$ is the same by construction. \end{proof} Informally, Lemma \ref{Lem5.1} states that the increment of `energy' (i.e., decrease in the number of particles) caused by a deviation from a PGS is lower-bounded proportionally to the `size' of the deviation. This is the gist of Peierls bounds used in the Pirogov--Sinai theory and its applications. \begin{lemma} \label{Lem5.2} Let ${\varphi}', {\varphi}''\in{\mathscr P}(D)$ be two distinct {\rm{PGS}}s. Consider a $D$-AC $\phi$ containing a connected component ${\Lambda}$ of ${\varphi}'$-correct templates enclosed by a connected component of ${\varphi}''$-correct templates. Then $\phi$ contains a closed chain of adjacent non-{\rm{MRA}} {\rm C}-triangles enclosing ${\Lambda}$. \end{lemma} \begin{proof} On ${\mathbb A}_2$ and on $\bbH_2$ for $D$ from Classes HA, HB, HC, the MRA-triangles are equilateral. Such triangles from two distinct PGSs cannot share a side in a $D$-AC. For $D$ from Classes HD and HE on $\bbH_2$ the assertion is verified case-by-case. \end{proof} \begin{proofII} The theorem is a direct consequence of Lemmas \ref{Lem5.1} and \ref{Lem5.2}, with an additional factor $1/9$ accounting for the possibility that each s- or t-defect is surrounded by 8 n-defects.
\end{proofII} \subsection{A Peierls bound via Voronoi cells}\label{SubSec5.2} An alternative method of establishing the Peierls bound is to use V-cells: it works on ${\mathbb A}_2$ and -- when $3|D^2$ -- on $\bbH_2$ (Classes HA and HB). Thus, from now on until the end of Section \ref{SubSec5.2} we assume that the attainable value $D^2$ is arbitrary on ${\mathbb A}_2$ and is divisible by 3 on $\bbH_2$. Consequently, the PGSs are configurations of type $(D,\alpha )$ (obtained from $D$-sub-lattices). The V-cell method is considerably shorter than the MRD-triangle method, but it has the drawback that the obtained Peierls constant $\ovp =\ovp (D)$ is not explicit. It is known \cite{F, Hs} that for any given $D$, a V-cell with the minimal possible area among $D$-ACs $\phi\in{\mathscr A} (D,{\mathbb R}^2)$ is a perfect hexagon with the side length $D/\sqrt{3}$ and area $S=S(D)$ defined in Eqn \eqref{SoD}. We call it a {\it perfect} V-cell. A $D$-AC $\phi\in{\mathscr A} (D,{\mathbb R}^2)$ is called V-{\it perfect} if it contains only perfect V-cells. The only V-perfect $D$-ACs $\phi\in{\mathscr A} (D,{\mathbb R}^2)$ are triangular lattices in ${\mathbb R}^2$ with the distance $D$ between neighboring lattice sites (see again \cite{Hs}). \begin{lemma} \label{Lem5.3} For each $D$ there exists a number $\odel =\odel (D, {\mathbb A}_2/\bbH_2)>0$ such that the area of a non-perfect \rV-cell in any {\rm{AC}} $\phi\in{\mathscr A}$ is $>S + \odel (D)$.
\end{lemma} \begin{proof} As follows from \cite{Hs}, to analyze optimal and next-to-optimal V-cells for ${\mathbf x}$ in $\phi\in{\mathscr A}$, it suffices to consider sites at distance at most $4D$ from ${\mathbf x}$, which yields finitely many possibilities of drawing V-cells on ${\mathbb A}_2$ or $\bbH_2$. There always exist an optimal and a next-to-optimal cell, as not all possibilities are the same. \end{proof} \begin{ree} {\rm It is precisely the fact that $\odel $ is not determined explicitly that leads to a non-explicit Peierls constant $\ovp$ in Lemma \ref{Lem5.4}.} $\blacktriangle$ \end{ree} Given a basic polygon ${\mathbb V}$, a PGS ${\varphi}\in{\mathscr P}$ and an AC $\phi_{\mathbb V}\in{\mathscr A} ({\mathbb V}\|{\varphi} )$, we have the set-theoretical identity \begin{equation}\label{(4)} \qquad\qquad\qquad \bigcup_{{\mathbf x} \in\phi_{\mathbb V}}{\mathcal V} ({\mathbf x}, \phi_{\mathbb V})= \bigcup_{{\mathbf x} \in{\varphi}\upharpoonright_{\mathbb V}} {\mathcal V} ({\mathbf x}, {\varphi}\upharpoonright_{\mathbb V}).\end{equation} Therefore, since the PGS ${\varphi}$ is a $(D,\alpha )$-configuration, for the partition function \eqref{PartFnctnV} we have that \begin{equation}\label{(5)} \qquad\qquad\qquad {\mathbf Z} ({\mathbb V}\|{\varphi} ) =
\sum_{\phi_{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C ({\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}\|{\varphi} )}\; \prod_{{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} \in\phi_{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}} u^{-S^{-1} \left( \left | {\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi_{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})\right| - S \right) }. \end{equation} Here, and in Lemma \ref{Lem5.4} below, we use the notation $\left| {\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi_{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z}) \right|$ and $\left| {\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma}) \right|$ for the area of ${\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi_{\mathbb V}} \def\bbW{{\mathbb W}} \def\bbZ{{\mathbb Z})$ and ${\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma} )$ where, in turn, $\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma} :=\phi\upharpoonright_{{\rm{Supp}}\,({\Gamma}} \def\gam{{\gamma} )}$. We also write $ |\rSp\,({\Gamma}} \def\gam{{\gamma} )|$ for the area of $\rSp\,({\Gamma}} \def\gam{{\gamma} )$. Recall, the quantities $\|\rSp\,({\Gamma}} \def\gam{{\gamma} )\|$ and $w({\Gamma}} \def\gam{{\gamma} )$ are defined in Eqns \eqref{NoTs} and \eqref{SWoC}, respectively. 
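As a numerical sanity check of the geometry entering \eqref{(5)}: a perfect V-cell is a regular hexagon of side $D/\sqrt3$, so its area is $(3\sqrt3/2)(D/\sqrt3)^2=(\sqrt3/2)D^2$. The following minimal Python snippet is illustrative only; the closed form $S(D)=\sqrt3\,D^2/2$ is our reading of Eqn \eqref{SoD}, consistent with $\kappa=1/S^2=4/(3D^4)$ used in the proof of Lemma \ref{Lem5.4}.

```python
import math

def perfect_vcell_area(D):
    """Area of a regular hexagon with side D/sqrt(3) (a perfect V-cell)."""
    side = D / math.sqrt(3.0)
    return 3.0 * math.sqrt(3.0) / 2.0 * side ** 2

def S(D):
    """Assumed closed form S(D) = sqrt(3)/2 * D^2, so that 1/S^2 = 4/(3 D^4)."""
    return math.sqrt(3.0) / 2.0 * D ** 2

for D2 in (16, 49, 147, 169):
    D = math.sqrt(D2)
    assert math.isclose(perfect_vcell_area(D), S(D), rel_tol=1e-12)
    assert math.isclose(1.0 / S(D) ** 2, 4.0 / (3.0 * D ** 4), rel_tol=1e-12)
```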
\begin{lemma}\label{Lem5.4} {\rm{(A Peierls bound via V-cells)}} There exists a constant $\ovp=\ovp(D) > 0$ such that for any contour \ ${\Gamma}} \def\gam{{\gamma} =({\rm{Supp}}\,({\Gamma}} \def\gam{{\gamma} ),\phi\upharpoonright_{{\Gamma}} \def\gam{{\gamma}})$ we have \begin{equation}\label{(6)} \qquad\qquad\qquad w({\Gamma}} \def\gam{{\gamma} ) = \prod_{{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} \in\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma}} u^{-S^{-1}\left( \left| {\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma} )\right| - S \right)}\leq u^{-\ovp(D)\|\rSp\,({\Gamma}} \def\gam{{\gamma} )\|}.\end{equation} \end{lemma} \begin{proof} The equality in Eqn \eqref{(6)} is simply a re-writing of \eqref{SWoC}. Further, we need to consider sites ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$ where $\left| {\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma}) \right| >S$; otherwise (i.e., when $\left| {\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma}) \right| =S$) site ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$ does not contribute into \eqref{(6)}. 
Observe that $$\hbox{if }\;\left|{\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma}) \right| - S \ge S\;\hbox{ then }\; \left| {\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma}) \right| - S \ge\frac{1}{2} \left|{\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma})\right|\,.$$ On the other hand, by Lemma \ref{Lem5.3}, $$\hbox{if }\;\left |{\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma})\right| - S < S\;\hbox{ then }\; \left |{\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma})\right| - S \ge \odel \ge\frac{\odel}{2S} \left|{\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma})\right|\,. $$ According to the definition of a ${\varphi}$-correct template, we have an inequality $$\sum\limits_{{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} \in\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma}} \left|{\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma})\right|{\mathbf 1}\Big( \left|{\mathcal V}} \def\cW{{\mathcal W} ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z},\phi\upharpoonright_{\Gamma}} \def\gam{{\gamma})\right| >S \Big) \ge\displaystyle\frac{1}{9 D^2} |\rSp\,({\Gamma}} \def\gam{{\gamma} )|\,.$$ Also, $||\rSp\,({\Gamma}} \def\gam{{\gamma} )|| =\kappa |\rSp\,({\Gamma}} \def\gam{{\gamma} )|$ where $\kappa =1/S^2=4/(3D^4)$. 
Thus, we can take \begin{equation}\label{(6A)}\displaystyle\ovp (D)=\displaystyle \frac{\kappa}{9 D^2} \min \left(\frac{1}{2}, \frac{\odel}{\sqrt 3} D^{-2}\right).\end{equation} \end{proof} \subsection{Proof of Theorems 1--3 (Classes TA1, TA2, TB), Theorems 7--9 (Classes HA1, HA2, HB), Theorem 11 (Class HC), Theorems 12 (Classes HD1 and HD2) and 13 (Class HE)}\label{SubSec5.3} Owing to Theorems I and II, the proof of the listed theorems is reduced to an explicit description of the PGS-equivalence classes. The structure of these classes follows from the arithmetic properties of the value $D^2$, to which the conditions of each theorem explicitly refer. Once again, in Theorem 13 we restrict ourselves to the analysis of PGSs only. \section{\bf Proof of Theorems 4--6 and 10}\label{Sec6} \subsection{Dominance for the H-C model}\label{SubSec6.1} Our study of dominance follows an approach developed in \cite{Sl1}, \cite{Za} and \cite{BS}. In particular, we use an appropriate family of {\it small} \ contours (see Definition~1 on page 566 in \cite{Za}) and then compare the {\it free energies} of the corresponding {\it truncated models} to decide which PGS-equivalence class is dominant. In the examples of dominance presented in Theorems 4--6 (Class TB) and 10 (Class HB) we have that all PGSs are $\alpha$-configurations; this fact generates a number of similarities in the analysis of these examples. Let us first give a common summary of our construction. The `smallest' contour in a PGS is generated by the removal of a single particle. The statistical weight of such a contour is $u^{-1}$; we can say that it represents a $u^{-1}$-excitation. The {\it density} of such $u^{-1}$-contours is the same in each of the PGSs ${\varphi}\in{\mathscr P}$. Similarly, the removal of two particles at distance $D$ from each other generates a contour of statistical weight $u^{-2}$.
Again, the density of such $u^{-2}$-contours is the same in every PGS ${\varphi}\in{\mathscr P}$. The next category of small contours is generated when three particles are removed at the vertices of a $D$-triangle $\triangle$, and one particle is inserted at a site inside $\triangle$. Here the new occupied site should lie at distance $\ge D$ from any other sub-lattice site. As before, the corresponding contour has statistical weight $u^{-2}$. We can speak of a {\it single insertion} repelling 3 particles from a PGS. Next, we will have to deal with double, triple, and quadruple admissible insertions maintaining the weight $u^{-2}$ for the emerging contour. To stress the latter property, we will often speak of $u^{-2}$-insertions. Figures 16--22 show the structure of $u^{-2}$-insertions for the cases considered in Theorems 4--6 and 10. Double admissible insertions occur when 4 particles are removed from the vertices of a $D$-rhombus formed by two adjacent $D$-triangles $\triangle_1$, $\triangle_2$, and 2 particles are inserted inside $\triangle_1\cup\triangle_2$. (For $D^2=49$ on lattice ${\mathbb A}_2$ (Theorem 4), the single and double $u^{-2}$-insertions suffice.) Next, triple admissible insertions occur when 5 particles are removed from the boundary of a trapeze formed by three pair-wise adjacent $D$-triangles $\triangle_1$, $\triangle_2$, $\triangle_3$, and 3 particles are inserted inside $\triangle_1\cup\triangle_2\cup\triangle_3$. Finally, quadruple admissible insertions occur when 6 particles are removed from the boundary of a $2D$-triangle formed by four pair-wise adjacent $D$-triangles $\triangle_1$, $\triangle_2$, $\triangle_3$, $\triangle_4$, and 4 particles are inserted inside $\triangle_1\cup\triangle_2\cup\triangle_3\cup\triangle_4$. Any other contour in the truncated model for the considered examples has statistical weight at most $u^{-3}$.
This statement requires a certain effort to verify (including a substantial computer assistance); it is done in Section \ref{Sec7} in the form of Lemmas \ref{Lem7.1}--\ref{Lem7.4}. With Lemmas \ref{Lem7.1}--\ref{Lem7.4} at hand, we can use in a standard way the polymer expansions for the free energies of the truncated models (see, e.g., Sections 1.7, 2.1 in \cite{Za} or Section~3.a in \cite{Se}). This allows us to upper-bound the contribution to these free energies from contours with weight $\leq u^{-3}$ by $cu^{-3}$ where $c > 0$ is an absolute constant. Thus, for $u$ large enough the determination of a dominant class is reduced to the count of densities of single, double, triple and quadruple insertions. \subsection{Proof of Theorem 4}\label{SubSec6.2} For $D^2=49$ on ${\mathbb A}} \def\bbD{{\mathbb D}_2$, we have two PGS-equivalence classes (inclined and horizontal); they are determined by inclined $D$-sub-lattices containing sites $(3, 5)$ or $(5 ,3)$ and a horizontal one containing site $(7, 0)$. We will use pairs $(5, 3)$ and $(7, 0)$ for referring to these sub-lattices and their associated PGSs. We want to check that the horizontal $(7, 0)$-PGSs are dominant and the inclined $(5,3)$-PGSs are not. \FigureP16 Any inclined $D$-triangle for $D^2=49$ covers 12 sites where we have a single $u^{-2}$-insertion repelling precisely 3 particles at the vertices of this triangle. In Figure 16 (a) these sites are marked by orange balls, while the repelled sites from a PGS are marked by black balls. The orange balls are placed at sites covered by closed concave circular triangles. On the other hand, open bi-convex lenses indicate positions where an inserted particle repels 4 black balls at the vertices of a $D$-rhombus which yields a $u^{-3}$-insertion. Similarly, any horizontal triangle also covers 12 sites where a single insertion repels 3 particles at the vertices of a $D$-triangle. 
We use the same legend to mark these possibilities in Figure 16 (c): closed concave circular triangles cover single $u^{-2}$-insertions, while open bi-convex lenses indicate positions where an inserted particle repels 4 black balls at the vertices of a $D$-rhombus, yielding a $u^{-3}$-insertion. Thus, both inclined and horizontal PGSs have the same density of single $u^{-2}$-insertions. The small contour which detects a difference is constructed when 4 particles at the vertices of a $D$-rhombus are removed and 2 particles inside the rhombus are inserted, maintaining admissibility. The statistical weight of this contour also equals $u^{-2}$. Figures 16 (b) and 16 (d) show examples of double $u^{-2}$-insertions marked in red. For any inclined $(5,3)$-rhombus there are 6 such pairs of sites. For any $(7,0)$-rhombus there are 7 such pairs. Any other contour in the truncated model for $D^2=49$ has statistical weight at most $u^{-3}$; this is proven in Lemma \ref{Lem7.1} in Section \ref{Sec7}. Therefore, only the horizontal PGS-equivalence class contains the dominant PGSs. $\rule{1ex}{1ex}$\par \subsection{Proof of Theorem 5}\label{SubSec6.3} The argument in the proof of Theorem~5 is similar to that of Theorem~4, except for the specific numbers of small contours. Here we distinguish between inclined (8, 7)- and horizontal (13, 0)-PGSs. The first difference with Theorem~4 is in the categories of contours having the statistical weight $u^{-2}$. As before, we have single and double admissible $u^{-2}$-insertions; see Figure 17. \FigureQ17 \FigureR18 In addition, we can place 3 particles in a trapeze and also 4 particles in a $2D$-triangle: see Figure 18. The number of single insertions equals 39 per triangle or 78 per $D$-rhombus in both PGS types. However, in the remaining three categories of $u^{-2}$-insertions, the (8, 7)-PGSs dominate distinctively, with 113 vs 78 doubles in a $D$-rhombus, 61 vs 20 triples in a trapeze and 39 vs 3 quadruples in a $2D$-triangle.
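Counts of this kind can be reproduced by a brute-force enumeration over a single fundamental domain of the sub-lattice. The following Python sketch is purely illustrative (it is an independent stand-in, not the authors' Java Program 3 {\tt CountExcitations}, and is not part of the proofs); it counts single $u^{-2}$-insertions in the simplest case $D^2=49$ on ${\mathbb A}_2$, in lattice coordinates $(a,b)$ with squared norm $a^2+ab+b^2$, recovering 12 per $D$-triangle (i.e., 24 per fundamental domain) for both the horizontal $(7,0)$- and the inclined $(5,3)$-PGS.

```python
def qnorm(a, b):
    """Squared Euclidean length of a*e1 + b*e2 on A_2 (|e1| = |e2| = 1,
    angle 60 degrees): the quadratic form a^2 + a*b + b^2."""
    return a * a + a * b + b * b

def count_single_insertions(u, v, D2):
    """Count, per fundamental domain of the D-sub-lattice spanned by u
    and v, the lattice sites repelling exactly 3 sub-lattice sites
    (squared distance < D2); each such site admits a single u^{-2}-insertion."""
    det = u[0] * v[1] - u[1] * v[0]  # sub-lattice index; equals D2 here
    assert det == D2
    singles = 0
    for a in range(-15, 15):
        for b in range(-15, 15):
            # keep one representative per coset:
            # (a, b) = alpha*u + beta*v with 0 <= alpha, beta < 1
            an = v[1] * a - v[0] * b
            bn = -u[1] * a + u[0] * b
            if not (0 <= an < det and 0 <= bn < det):
                continue
            repelled = sum(
                1
                for i in range(-3, 4)
                for j in range(-3, 4)
                if qnorm(a - i * u[0] - j * v[0], b - i * u[1] - j * v[1]) < D2
            )
            if repelled == 3:
                singles += 1
    return singles

# 12 single u^{-2}-insertions per D-triangle = 24 per fundamental domain:
assert count_single_insertions((7, 0), (0, 7), 49) == 24   # horizontal (7,0)-PGS
assert count_single_insertions((5, 3), (-3, 8), 49) == 24  # inclined (5,3)-PGS
```

Here $(-3,8)$ is the rotation of $(5,3)$ by $60^\circ$, so the two vectors span the inclined $D$-sub-lattice. Counting double, triple and quadruple insertions additionally requires tracking admissible pairs, triples and quadruples of inserted sites and is left to the ancillary programs.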
The enumeration of cases above can be performed manually, but we also present a Java routine which automates this task. Cf. Program 3 {\tt CountExcitations} in the ancillary file and Section \ref{Sec9}. The verification that no other contour with statistical weight smaller than $u^{-3}$ exists for the horizontal $D$-sub-lattice is done in Lemma \ref{Lem7.2} in Section \ref{Sec7}; it requires a more massive enumeration than in Lemma \ref{Lem7.1} and therefore relies on a computer-assisted argument. $\rule{1ex}{1ex}$\par \subsection{Proof of Theorem 6}\label{SubSec6.4} The argument for $D^2=147$ on ${\mathbb A}_2$ repeats that for $D^2=49, 169$ and is again based on an exact count of $u^{-2}$-insertions. Here we distinguish between two PGS types referred to as vertical (7, 7)- and inclined (11, 2)-PGSs. \FigureS19 Single $u^{-2}$-insertions do not favor any specific PGS: their number equals 34 per triangle or 68 per rhombus in all PGSs. However, the double, triple and quadruple $u^{-2}$-insertions favor the (7, 7)-PGSs. Viz., the number of double insertions is 86 in a (7, 7)- and 51 in an (11, 2)-rhombus, the number of triple insertions is 39 in a (7, 7)- and 1 in an (11, 2)-triangle, and the number of quadruple insertions is 1 in a (7, 7)- and 0 in an (11, 2)-triangle. Again, this enumeration can be performed manually or by executing the Java routine from Program 3 {\tt CountExcitations}. Cf. Figures 19 and 20. The proof is completed by applying Lemma \ref{Lem7.3}, which guarantees that the $u^{-2}$-insertions listed above are the only ones. $\rule{1ex}{1ex}$\par \FigureT20 We do not know if the contours of weight $u^{-2}$ always suffice to determine dominant PGSs and if a dominant PGS class is always unique. A numerical calculation covering $D^2 \leq 100000$ confirms that there is only one $D$-sub-lattice which dominates in the amount of $u^{-2}$-contours.
We conjecture that it is the sub-lattice which has an orange ball in a site at the shortest distance from the triangle vertex among all $D$-sub-lattices. \subsection{Proof of Theorem 10}\label{SubSec6.5} Once more, we follow the established scheme of counting the $u^{-2}$-insertions. As in Theorem 6, we distinguish between the inclined (11, 2)- and vertical (7, 7)-PGS types, now on $\bbH_2$. The number of vertical PGSs equals $98$ while the number of inclined PGSs is $196$. As before, the $u^{-1}$-contours do not make a distinction. The analysis of dominance focuses on admissible $u^{-2}$-insertions. A vertical (7, 7)-PGS is shown in Figure 21. As earlier, single $u^{-2}$-insertions remove 3 particles at the vertices of a $D$-triangle and add one inside the same triangle. They are again marked by orange balls in frame (a). The number of such insertions is 21 in triangles $OAB$, $OCD$ (and also $BFC$ in frame (b)), and $25$ in triangles $OBC$, $ODE$ (and also $CDG$ in frame (b)). In total, we have $46$ single insertions in each of the five rhombuses $OABC$, $OBFC$, $OBDC$, $OCGD$, $OCDE$ featured in frame (b). \FigureU21 Double $u^{-2}$-insertions remove 4 particles at the vertices of a $D$-rhombus and add 2 particles inside the same rhombus. In Figure 21 (b), a double $u^{-2}$-insertion is marked by a red bar. The number of admissible double insertions inside every $D$-rhombus equals $108$. Triple and quadruple admissible $u^{-2}$-insertions for vertical PGSs are also shown in Figure 21 (b). In a triple insertion 5 particles are removed and 3 added (blue balls joined by blue bars), whereas in a quadruple insertion 6 particles are removed and 4 added (green balls joined by tripods of green bars), following the same geometric pattern as before (a trapeze or a $2D$-triangle). In total, we have $63$ triple $u^{-2}$-insertions per $D$-rhombus. Quadruple admissible $u^{-2}$-insertions cannot occur inside the $2D$-triangle $EBG$.
However, for triangle $AFD$ they can occur, and their number equals $9$. Hence, the number of quadruple insertions with the middle point of a tripod inside triangle $OBC$ equals $9$. Thus, the total number of admissible quadruple $u^{-2}$-insertions in a $D$-rhombus is $9$. According to Lemma \ref{Lem7.4}, the list of all admissible $u^{-2}$-insertions is exhausted by the aforementioned possibilities. All in all, the above count yields $226$ admissible $u^{-2}$-insertions per $D$-rhombus in a vertical PGS. \FigureV22 The situation with an inclined (11, 2)-PGS for $D^2=147$ on $\bbH_2$ is shown in Figure 22. In frame (a) we again put orange balls in positions where a single insertion repels three occupied sites at the vertices of the covering $D$-triangle. The number of such insertions is $24$ in triangles $OAB$ and $OCD$ and $22$ in triangles $OBC$ and $ODE$, with $46$ insertions per $D$-rhombus. Next, in frame (b) we mark by a red bar an admissible double insertion removing 4 vertices of the covering $D$-rhombus. The number of double $u^{-2}$-insertions is $23$ in all rhombuses in the inclined PGS. Thus, the total amount of double insertions equals $69$ per $D$-rhombus. As before, triple insertions repelling 5 vertices in an inclined PGS occur when 3 particles are put in a trapeze, one insertion for each involved triangle. The total number of admissible triple $u^{-2}$-insertions per $D$-rhombus is $3$. Lastly, quadruple insertions repelling 6 vertices could have occurred when 4 particles are put in a $2D$-triangle. However, in an inclined PGS such insertions do not exist. According to Lemma \ref{Lem7.4}, the list of admissible $u^{-2}$-insertions in an inclined PGS is exhausted by the above types. All in all, the number of admissible $u^{-2}$-insertions per $D$-rhombus in an inclined PGS equals $118$. Hence, for $D^2=147$ on $\bbH_2$, the vertical PGS class is dominant, and for $u$ large enough we have $98$ {\rm{EGM}}s generated by the (7, 7)-PGSs.
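The above tallies can be summarized per $D$-rhombus as follows (all numbers are taken from the counts above):

```latex
\[
\begin{array}{l|cccc|c}
 & \hbox{single} & \hbox{double} & \hbox{triple} & \hbox{quadruple} & \hbox{total}\\
\hline
\hbox{vertical } (7,7)\hbox{-PGS} & 46 & 108 & 63 & 9 & 226\\
\hbox{inclined } (11,2)\hbox{-PGS} & 46 & 69 & 3 & 0 & 118
\end{array}
\]
```

Since $226>118$, the vertical class wins at order $u^{-2}$.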
$\rule{1ex}{1ex}$\par \section{\bf Proof of technical assertions from Theorems 4--6, 10}\label{Sec7} In this section we verify that the small contours with a weight $\geq u^{-2}$ which were determined in Section \ref{Sec6} are the only ones possible for the selected values of $D^2$, and that any other contour has statistical weight $\le u^{-3}$. The corresponding statements are Lemmas \ref{Lem7.1}--\ref{Lem7.3} on ${\mathbb A}_2$ and Lemma \ref{Lem7.4} on $\bbH_2$; the latter is, essentially, implied by Lemma \ref{Lem7.3}. The argument is based on the following construction. Without loss of generality we can assume that the underlying PGS ${\varphi}\in{\mathscr P} (D)$ has ${\varphi} ({\mathbf 0})=1$ (i.e., ${\varphi} $ is a $D$-sub-lattice in ${\mathbb A}_2$). Recall that a ${\varphi}$-contour ${\Gamma}$ can be obtained by adding finitely many particles at some inserted sites, and then removing the particles from ${\varphi}$ which are repelled by the inserted ones (removed sites/particles). The resulting admissible configuration is denoted by $\phi$. One can also remove from $\phi$ any additional particles, but such an unforced removal can only decrease the weight $w({\Gamma} )$ and therefore will be disregarded. As we saw earlier, every inserted site repels from ${\varphi} $ either 3 or 4 removed sites. The inserted sites which repel 3 removed sites are located inside the closed concave circular triangles identified in the proof of Theorems 4--6 (orange balls in Figures 16, 17, 19). The complement (in $\mathbb{R}^2$) to these triangles consists of mutually disjoint open circular bi-convex lenses (gray areas in Figures 16, 17, 19). A particle inserted in a lattice site belonging to a lens repels 4 removed sites. Consider a $D$-connected component $\Delta$ of the set of removed sites (together with the corresponding inserted sites).
Let $\sharp(\Delta)$ denote the difference between the numbers of removed and inserted sites in $\Delta$. Our goal is to verify that the weight $u^{-\sharp(\Delta)}$ of any such component is at most $u^{-3}$, i.e., $\sharp(\Delta) \ge 3$. Note that a contour support has been defined in Section \ref{SubSec3.1} by using the notion of a template; hence it can include more than one $\Delta$. In that case the statistical weight of the contour is the product of the statistical weights of constituting $D$-connected components, and for our purposes it is enough to estimate the weight of a single component $\Delta$. To evaluate $\sharp (\Delta )$, it is convenient to introduce a {\it total repelling force} $\rF ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})=\rF ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} ,\phi )$ acting (in the resulting AC $\phi\in\mathscr A} \def\sB{\mathscr TB} \def\sC{\mathscr C (D)$) upon a removed site ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}\in{\varphi}$. Such a force is accumulated from all inserted sites $\by_i\in\phi$ that repel site ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$: $\rF ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}) = \sum\limits_i \rF ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}, \by_i, \phi)$. We require that every summand $\rF ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}, \by_i, \phi)$ is non-negative and depends only on the Euclidean distance $\rho({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}, \by_i)$ between ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$ and $\by_i$. 
The square of this distance $\rho({\mathbf x}, \by_i)^2$ is always a positive integer, and we use the shorthand notation $f_r$ for $\rF ({\mathbf x}, \by, \phi)$, with $r = \rho({\mathbf x}, \by)^2\in\bbN$, ${\mathbf x}\in{\varphi}$, $\by\in\phi$. With this notation at hand, $\forall$ \ ${\mathbf x}\in{\varphi}$, $$\rF ({\mathbf x}):=\sum\limits_{r<D^2,\;\by\in\phi}f_r{\mathbf 1}\Big(\hbox{$\by$ removes ${\mathbf x}$, and $r=\rho (\by ,{\mathbf x})^2$}\Big),\; \hbox{ where }\;f_r\geq 0. \eqno (7.1.{\rm A})$$ The coefficient $f_r$ is referred to as a {\it local repelling force} at distance ${\sqrt r}$. A dual quantity ${\rm G} (\by) ={\rm G} (\by,\phi )$ represents the {\it total repelling force} generated by an inserted site $\by$: $${\rm G} (\by):=\sum\limits_{r<D^2,\;{\mathbf x}\in{\varphi}}f_r{\mathbf 1}\Big(\rho ({\mathbf x},\by)^2=r,\;{\mathbf x}\; \hbox{is removed by}\;\by\Big),\;\;\;\by\in\phi .\eqno (7.1.{\rm B})$$ Our aim is to find $f_r$ such that, for any site ${\mathbf x}\in{\varphi}$ and any site $\by\in\phi$ removing 3 or 4 sites from ${\varphi}$, $${\rm{(a)}}\quad\rF ({\mathbf x} ,\phi )\leq 1,\qquad{\rm{(b)}}\quad{\rm G} (\by,\phi )=1.
\eqno (7.2)$$ Owing to (7.2), if the {\it deficit} $\delta ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} )=\delta ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} ,\phi )$ of the removed site ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$ is calculated as $1-\rF ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z})$ then $\delta ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} )\geq 0$, and $$\sharp(\Delta )=\sum_{{\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}\in{\varphi}}\delta ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}, \phi ) {\mathbf 1}\Big(\hbox{site ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}$ is removed when passing from ${\varphi}$ to $\phi$}\Big).\eqno (7.3)$$ From now on we assume that the configuration $\phi$ has a single $D$-connected component $\Delta$, and the rest of the argument deals with this $\Delta$. Figure 23 shows a fragment of a set $\Delta$ with a collection of inserted and removed sites. \WFigure25 The next observation is that set $\Delta$ consists of {\it internal} sites for which all 6 sub-lattice neighbors also belong to $\Delta$ and {\it boundary} sites which have at least one occupied $D$-sub-lattice neighbor (obviously, not belonging to $\Delta$). Each $D$-connected component of the boundary sites in $\Delta$ defines a closed broken line in $\mathbb{R}^2$, and the set $\Delta$ can be understood as $\mathbb{R}^2$-polygon with the boundary $\partial\Delta$ formed by these broken lines. In general, the boundary $\partial\Delta$ can have several connected components: one external and zero or more internal ones. An ambiguous situation arises when 4 $D$-segments from $\partial\Delta$ meet at the same boundary site (i.e., this site has 2 opposite $D$-neighbors that are occupied). 
In that case we fictitiously cut this site along the short line segment (of length less than 1) which passes through this site and has both ends inside $\Delta$ (viewed as an open polygon in $\mathbb{R}^2$). This removes the ambiguity, and the exterior and the interior of $\Delta$ become uniquely defined. It is clear that, as an $\mathbb{R}^2$-polygon, $\Delta$ can only have vertices with angles $\pi/3$, $2\pi/3$ and $4\pi/3$. We say that the corresponding removed sites from $\partial\Delta$ are of type $\pi/3$, $2\pi/3$ and $4\pi/3$, respectively. The remaining sub-lattice sites from $\partial\Delta$ correspond to the angle $\pi$; we say that such a site has type $\pi$. If a vertex ${\mathbf x}\in\partial\Delta$ is repelled only by a single inserted site $\by$ then imagine the particle at $\by$ being deleted. Then vertex ${\mathbf x}$ also disappears from $\Delta$ (as nothing repels it anymore), and the value $\sharp (\Delta )$ does not increase. (Actually, $\sharp (\Delta)$ remains intact if $\by$ repels a single vertex ${\mathbf x}$ in $\Delta$.) In Figures 24, 25 we refer to such a site ${\mathbf x}$ as {\it deletable}. \FigureY26 In view of the above definition, every polygon $\Delta$ can be reduced, by the process of deletion, to an irreducible polygon $\Delta^0$ for which $\sharp (\Delta^0)\leq \sharp (\Delta)$. By definition, a polygon $\Delta$ with a single inserted site is irreducible. The simplest form of $\Delta^0$ is a $D$-triangle with a single inserted site, where $\sharp (\Delta^0)=2$. We would like to: (i) list all $\Delta$s that are reduced to a $D$-triangle (possibly, with the help of a computer), and (ii) demonstrate that for all other irreducible polygons $\Delta^0$, we have $\sharp (\Delta^0)\geq 3$.
In fact, the next irreducible case is where $\Delta^0$ is a $D$-rhombus with a single inserted site: it has $\sharp (\Delta^0) \ge 3$, in agreement with property (ii). \FigureZ27 For any other (larger) irreducible polygon $\Delta^0$, the boundary $\partial\Delta^0$ must have (i) no vertex of type $\pi/3$ and (ii) at least 6 vertices of type $2\pi/3$. Each of the latter 6 vertices is repelled by exactly two inserted sites. Our goal in the lemmas below is to find, for the corresponding value of $D^2$, a collection of repelling forces $\{f_r\}$ such that $$\delta ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}) > 1/3\;\hbox{ for any ${\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z} \in \partial\Delta^0$ of type $2\pi/3$.}\eqno (7.4)$$ This would imply the desired assertions, as $6 \delta ({\mathbf x}} \def\by{{\mathbf y}} \def\bz{{\mathbf z}) > 2$. Let us now pass to specific cases. The proofs of Lemmas \ref{Lem7.1}--\ref{Lem7.4} require a finite enumeration which was done by computer. Cf. Programs 4 {\tt VerifyRepellingForces} and 5 {\tt CountMinDelta} in Section \ref{Sec9} and in the ancillary file. The first case is $D^2=49$ on ${\mathbb A}} \def\bbD{{\mathbb D}_2$. Define the following family $\{f_r\}$: $$\begin{array}{lllll} f_{1} = 44/56,& f_{3} = 40/56,& f_{4} = 40/56,& f_{7} = 31/56,& f_{9} = 31/56,\\ f_{12} = 22/56,& f_{13} = 22/56,& f_{16} = 17/56,& f_{19} = 17/56,& f_{21} = 17/56,\\ f_{25} = 8/56,& f_{27} = 8/56,& f_{28} = 8/56,& f_{31} = 8/56,& f_{36} = 4/56,\\ f_{37} = 4/56,& f_{39} = 4/56,& f_{43} = 4/56,& f_{48} = 4/56. \end{array}\eqno (7.5)$$ The values $r=$ 1, 3, 4, 7, 9, 12, 13, 16, 19, 21, 25, 27, 28, 31, 36, 37, 39, 43, 48 in (7.5) represent all squared Euclidean distances from ${\mathbf 0}$ to the ${\mathbb A}} \def\bbD{{\mathbb D}_2$-sites within an open $\mathbb{R}^2$-disk of radius~$7$. 
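Property (7.2)(b) for the family (7.5) amounts to a finite check: for every possible inserted site $\by$, the forces $f_r$ emitted towards the repelled sub-lattice sites must sum to exactly 1. The following Python sketch performs this check in exact rational arithmetic; it is an illustrative stand-in, independent of the authors' Program 4 {\tt VerifyRepellingForces}, and uses lattice coordinates with squared norm $a^2+ab+b^2$.

```python
from fractions import Fraction

# Local repelling forces f_r from Eqn (7.5); all numerators are over 56.
f = {r: Fraction(n, 56) for r, n in {
    1: 44, 3: 40, 4: 40, 7: 31, 9: 31, 12: 22, 13: 22, 16: 17, 19: 17,
    21: 17, 25: 8, 27: 8, 28: 8, 31: 8, 36: 4, 37: 4, 39: 4, 43: 4, 48: 4,
}.items()}

def qnorm(a, b):
    """Squared norm a^2 + a*b + b^2 on the triangular lattice A_2."""
    return a * a + a * b + b * b

def check_total_force(u, v):
    """Verify (7.2)(b): G(y) = 1 for every non-sub-lattice site y,
    for the D^2 = 49 sub-lattice spanned by u and v."""
    det = u[0] * v[1] - u[1] * v[0]  # = 49
    for a in range(-15, 15):
        for b in range(-15, 15):
            an = v[1] * a - v[0] * b
            bn = -u[1] * a + u[0] * b
            # one representative per coset, skipping the sub-lattice itself
            if not (0 <= an < det and 0 <= bn < det) or (an == bn == 0):
                continue
            G = sum(
                f[r]
                for i in range(-3, 4)
                for j in range(-3, 4)
                for r in [qnorm(a - i * u[0] - j * v[0], b - i * u[1] - j * v[1])]
                if r < 49
            )
            assert G == 1, ((a, b), G)

check_total_force((7, 0), (0, 7))   # horizontal (7,0)-PGS
check_total_force((5, 3), (-3, 8))  # inclined (5,3)-PGS
```

The inequality (7.2)(a), in contrast, depends on the whole configuration $\phi$ and requires the larger enumeration carried out in the ancillary programs.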
\begin{lemma}\label{Lem7.1} The family $(7.5)$ gives a collection of local repelling forces for $D^2=49$ on ${\mathbb A}_2$ satisfying $(7.2)$ and $(7.4)$, for both horizontal and inclined {\rm{PGS}}s. More precisely, for this collection, $\forall$ irreducible polygon $\Delta^0$ and vertex ${\mathbf x} \in \partial\Delta^0$ of type $2\pi/3$, $$\delta ({\mathbf x} )\geq 1 - f_{19}- f_{31} = 1 - f_{21}- f_{27} = 31 / 56. \eqno (7.6)$$ \end{lemma} \begin{rff} {\rm For the proof of Theorem 4, it suffices to find a collection $\{f_r\}$ only for inclined PGSs. Cf. Lemma \ref{Lem7.2}. However, it turns out that the family $\{f_r\}$ in (7.5) serves both types of PGSs.} $\blacktriangle$ \end{rff} Next, we deal with $D^2=$169. Here we consider the values $r$ representing the squared Euclidean distances from ${\mathbf 0}$ to all ${\mathbb A}_2$-sites within an open $\mathbb{R}^2$-disk of radius~$13$.
For such values $r$ we set: $$\begin{array}{lllll} f_{1} = 131 / 135,& f_{3} = 127 / 135,& f_{4} = 251 / 270,& f_{7} = 241 / 270,& f_{9} = 116 / 135,\\ f_{12} = 37 / 45,& f_{13} = 221 / 270,& f_{16} = 7 / 9,& f_{19} = 133 / 180,& f_{21} = 383 / 540,\\ f_{25} = 179 / 270,& f_{27} = 19 / 30,& f_{28} = 169 / 270,& f_{31} = 317 / 540,& f_{36} = 281 / 540,\\ f_{37} = 14 / 27,& f_{39} = 131 / 270,& f_{43} = 41 / 90,& f_{48} = 37 / 90,& f_{49} = 43 / 108,\\ f_{52} = 103 / 270,& f_{57} = 35 / 108,& f_{61} = 53 / 180,& f_{63} = 151 / 540,& f_{64} = 5 / 18,\\ f_{67} = 7 / 27,& f_{73} = 119 / 540,& f_{75} = 11 / 54,& f_{76} = 109 / 540,& f_{79} = 11 / 60,\\ f_{81} = 22 / 135,& f_{84} = 83 / 540,& f_{91} = 2 / 15,& f_{93} = 31 / 270,& f_{97} = 29 / 270,\\ f_{100} = 1 / 9,& f_{103} = 4 / 45,& f_{108} = 2 / 27,& f_{109} = 2 / 27,& f_{111} = 7 / 108,\\ f_{112} = 1 / 15,& f_{117} = 2 / 45,& f_{121} = 11 / 270,& f_{124} = 23 / 540,& f_{127} = 11 / 270,\\ f_{129} = 4 / 135,& f_{133} = 4 / 135,& f_{139} = 2 / 135,& f_{144} = 1 / 54,& f_{147} = 2 / 135,\\ f_{148} = 2 / 135,& f_{151} = 1 / 270,& f_{156} = 1 / 270,& f_{157} = 1 / 180,& f_{163} = 0. \end{array}\eqno (7.7)$$ \begin{lemma}\label{Lem7.2} The family $(7.7)$ gives a collection of local repelling forces for $D^2=169$ on ${\mathbb A}_2$ satisfying $(7.2)$ and $(7.4)$, for horizontal $(13,0)$-{\rm{PGS}}s. More precisely, for this collection, $\forall$ irreducible polygon $\Delta^0$ in a horizontal $(13,0)$-{\rm{PGS}} and vertex ${\mathbf x} \in \partial\Delta^0$ of type $2\pi/3$, $$\delta ({\mathbf x} )\geq 1 - f_{28} - f_{133} = 93 / 270. \eqno (7.8)$$ \end{lemma} Finally, we consider the example of $D^2=$147 on ${\mathbb A}_2$.
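Before turning to $D^2=147$, note that the exact value of the bound in (7.8) can be double-checked with rational arithmetic; the sketch below (an independent check, not one of the paper's programs) simply substitutes the entries $f_{28}=169/270$ and $f_{133}=4/135$ from the family (7.7):

```python
from fractions import Fraction

# Entries taken from the family (7.7)
f28 = Fraction(169, 270)
f133 = Fraction(4, 135)

# Lower bound of (7.8): delta(x) >= 1 - f_28 - f_133
bound = 1 - f28 - f133
print(bound)                   # 31/90, i.e. 93/270
assert bound == Fraction(93, 270)
assert bound > Fraction(1, 3)  # hence 6*delta(x) > 2, as required by (7.4)
```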
Set: $$\begin{array}{lllll} f_{1} = 24 / 24,& f_{3} = 24 / 24,& f_{4} = 24 / 24,& f_{7} = 23 / 24,& f_{9} = 22 / 24,\\ f_{12} = 21 / 24,& f_{13} = 21 / 24,& f_{16} = 20 / 24,& f_{19} = 19 / 24,& f_{21} = 18 / 24,\\ f_{25} = 16 / 24,& f_{27} = 15 / 24,& f_{28} = 15 / 24,& f_{31} = 14 / 24,& f_{36} = 12 / 24,\\ f_{37} = 12 / 24,& f_{39} = 11 / 24,& f_{43} = 10 / 24,& f_{48} = 9 / 24,& f_{49} = 8 / 24,\\ f_{52} = 7 / 24,& f_{57} = 6 / 24,& f_{61} = 5 / 24,& f_{63} = 4 / 24,& f_{64} = 4 / 24,\\ f_{67} = 4 / 24,& f_{73} = 3 / 24,& f_{75} = 3 / 24,& f_{76} = 2 / 24,& f_{79} = 2 / 24,\\ f_{81} = 2 / 24,& f_{84} = 2 / 24,& f_{91} = 1 / 24,& f_{93} = 1 / 24,& f_{97} = 1 / 24, \end{array}\eqno (7.9{\rm A})$$ with $$f_r = 0\;\hbox{ for }\;r > 97.\eqno (7.9{\rm B})$$ This yields a family of values $f_r$ where $r$ represents the squared Euclidean distance from ${\mathbf 0}$ to all ${\mathbb A}_2$-sites within an open $\mathbb{R}^2$-disk of radius~$\sqrt{147}$. \begin{lemma}\label{Lem7.3} The family $(7.9{\rm A},{\rm B})$ gives a collection of local repelling forces $\{f_r\}$ for $D^2=147$ on ${\mathbb A}_2$ satisfying $(7.2)$ and $(7.4)$, for inclined $(11,2)$-{\rm{PGS}}s. More precisely, for this collection, $\forall$ irreducible polygon $\Delta^0$ in an inclined $(11,2)$-{\rm{PGS}} and vertex ${\mathbf x} \in \partial\Delta^0$ of type $2\pi/3$, $$\delta ({\mathbf x} ) \geq 1 - f_{37} - f_{100} = 1/2. \eqno (7.10)$$ \end{lemma} Next, we extend our analysis to $\bbH_2$. Consider the values $f_r$ given by Eqns (7.9A,B) for $r$ representing the squared Euclidean distance $\leq 147$ between two $\bbH_2$-sites. We call it the $\bbH_2$-projected family (7.9A,B).
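Before stating the $\bbH_2$ analogue, the bound in (7.10) can be verified directly from (7.9A,B): since $100 > 97$, the cutoff rule (7.9B) gives $f_{100}=0$, so the bound reduces to $1-f_{37}$. A short Python check (again an independent sketch, not one of the paper's programs):

```python
from fractions import Fraction

# Family (7.9A) entry f_37 = 12/24; rule (7.9B): f_r = 0 for r > 97,
# hence f_100 = 0.
f = {37: Fraction(12, 24), 100: Fraction(0)}

bound = 1 - f[37] - f[100]     # lower bound in (7.10)
print(bound)                   # 1/2
assert bound == Fraction(1, 2)
assert bound > Fraction(1, 3)  # the requirement (7.4)
```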
\begin{lemma}\label{Lem7.4} The $\bbH_2$-projected family $(7.9{\rm A},{\rm B})$ gives a collection of local repelling forces $\{f_r\}$ for $D^2=147$ on $\bbH_2$ satisfying $(7.2)$ and $(7.4)$, for inclined $(11,2)$-{\rm{PGS}}s. More precisely, for this collection, $\forall$ irreducible polygon $\Delta^0$ in an inclined $(11,2)$-{\rm{PGS}} and vertex ${\mathbf x} \in \partial\Delta^0$ of type $2\pi/3$, the bound $(7.10)$ holds true. \end{lemma} \begin{rgg} {\rm In essence, the local repelling forces $f_r$ are related to an attempt to improve a Peierls constant for the listed values $D^2=$ 49, 147, 169. In our opinion, this method in its present form can work only for moderate values of $D^2$.} $\blacktriangle$ \end{rgg} \section{A brief note on sliding on $\bbH_2$}\label{Sec8} As was noted, the values $D^2=$ 4, 7, 31, 133 from Class HS exhibit sliding on $\bbH_2$. This is characterized by a cost-free passage from one type of PGSs to another. \Figurea1 For $D^2=$ 4 we have two types of PGSs: (a) one formed by hexagons with side-length 2, and (b) the other formed by $\beta$-configurations. These patterns can be intermittent in a stripe-like fashion, which generates countably many PGSs with no loss in the weight in the course of transition. See Figure 26 (a). Similarly, for $D^2=$ 7 we have the following types of PGSs: (a) an $(\alpha ,D)$-configuration for $D^2=9$, (b) an assortment of $\beta$-configurations of various shapes and orientations. Again, it is possible to combine these patterns and generate countably many PGSs with no loss in the weight in the course of transition. See Figure 26 (b). For $D^2=$ 31, we have a competition between strips formed by $\widetilde D$-triangles with ${\widetilde D}^2=36$ and triangles with squared side-lengths 31, 36, 43. Such triangles share common sides and have equal areas.
Again, the separation line does not incur a loss in weight. Similarly, for $D^2=$ 133, we have a competition between strips formed by $\widetilde D$-triangles with ${\widetilde D}^2=144$ and triangles with squared side-lengths 133, 144, 157. Such triangles again share a common side and have equal areas. As earlier, the separation line does not incur any loss in weight. A pattern resembling that for $D^2=$ 31, 133 is typical for sliding on $\bbZ^2$; cf. \cite{MSS1}. \Figureb2 \section{Comments on computer programs used in the proofs}\label{Sec9} The ancillary file to this paper contains five Java programs and their outputs. These programs are used to (a) assist the proof of Lemmas 4.9, 4.10, (b) identify distinguishing small contours (admissible $u^{-2}$-insertions) for the values $D^2=49$, $D^2=169$ and $D^2=147$ and (c) assist the proof of Lemmas \ref{Lem7.1}--\ref{Lem7.4}. Lemmas 4.9 and 4.10 are parts of the proof of Theorem I(ii) (in the part concerning Classes HD and HE) and Theorems 12 and 13. Lemmas \ref{Lem7.1}--\ref{Lem7.4} are parts of the proof of Theorems 4--6 and 10, for $D^2=49$, $D^2=169$ and $D^2=147$. The routines can be executed on any computer hosting Java Development Kit version 1.4 or later. The routine from Program 5 requires 3 GB of RAM, while the other routines are not resource-hungry. Executions of all routines take from a few seconds to 60 minutes. Program 1 {\tt NearestLoschianNumber} is a routine working on $\bbH_2$. It checks that, apart from 184 values, for each $D^2< (54)^4$ such that $D^2$ is not divisible by 3, (i) the inequality involving the RHS of Eqn \eqref{eq:L4.8_1} holds true: $\displaystyle\frac{\sqrt 3}{2}D^2 + \frac{D}{2{\sqrt 3}}>\frac{{\sqrt 3}(D^*)^2}{2}=2s (\triangle^* )$, (ii) inequalities \eqref{(4.15)}, \eqref{(4.16)} are satisfied.
It leads to the conclusion that for each $D^2< (54)^4$, apart from the above 184 values, if $D^2$ is not divisible by 3 then this value $D^2$ belongs to Class HC: the PGSs are $(\alpha ,D^*)$-configurations, where $D^*>D$ is the nearest L\"oschian number such that $3|(D^*)^2$. This assists the proof of Lemmas 4.9 and 4.10. Program 2 {\tt SpecialD} is a routine analyzing the 184 values $D^2$ detected by Program 1. It specifies the values forming Classes HD, HE and HS on $\bbH_2$. This routine (i) extracts the exceptional values $D^2=$ 4, 7, 13, 16, 28, 31, 49, 64, 67, 97, 133, 157, 256 (Classes HD, HE, HS) and identifies the PGSs for the exceptional non-sliding $D^2\neq$ 4, 7, 31, 133, (ii) checks that each $D^2$ among the above 184 values which is not from Classes HD, HE, HS belongs to Class HC. Again, this assists the proof of Lemmas 4.9 and 4.10. Program 3 {\tt CountExcitations} is a routine calculating, for a given sub-lattice in ${\mathbb A}_2$ or $\bbH_2$, the number of distinguishing small contours (admissible $u^{-2}$-insertions) of the types used in the proofs of Theorems 4--6, 10. The execution results are presented only for $D^2=49$, $D^2=169$ and $D^2=147$ and the sub-lattices used in the proof of Lemmas \ref{Lem7.1}--\ref{Lem7.4}. Program 4 {\tt VerifyRepellingForces} is a routine that verifies, for $D^2=$ 49, 169, 147, and a family of local repelling forces $\{f_r\}$, whether Eqn (7.2) is satisfied. Program 5 {\tt CountMinDelta} is a routine that checks inequality (7.4), for a given sub-lattice and a family of local repelling forces, for $D^2=49$, $D^2=169$ and $D^2=147$. Programs 4, 5 assist the proof of Lemmas \ref{Lem7.1}--\ref{Lem7.4}. \vskip 0.5cm \noindent {\bf Acknowledgement} IS and YS thank the Math Department, Penn State University, for hospitality and support. YS thanks St John's College, Cambridge, for long-term support. \end{document}
Iberian pig mesenchymal stem/stromal cells from dermal skin, abdominal and subcutaneous adipose tissues, and peripheral blood: in vitro characterization and migratory properties in inflammation Alexandra Calle, Clara Barrajón-Masa, Ernesto Gómez-Fidalgo, Mercedes Martín-Lluch, Paloma Cruz-Vigo, Raúl Sánchez-Sánchez & Miguel Ángel Ramírez (ORCID: orcid.org/0000-0002-5868-2134) Recently, the capacity of mesenchymal stem/stromal cells (MSCs) to migrate into damaged tissues has been reported. For MSCs to be a promising tool for tissue engineering and cell and gene therapy, it is essential to know their migration ability according to their tissue of origin. However, little is known about the molecular mechanisms regulating porcine MSC chemotaxis. The aim of this study was to examine the migratory properties in an inflammatory environment of porcine MSC lines from different tissue origins: subcutaneous adipose tissue (SCA-MSCs), abdominal adipose tissue (AA-MSCs), dermal skin tissue (DS-MSCs) and peripheral blood (PB-MSCs). SCA-MSCs, AA-MSCs, DS-MSCs and PB-MSCs were isolated and analyzed in terms of morphological features, alkaline phosphatase activity, expression of cell surface and intracellular markers of pluripotency, proliferation, in vitro chondrogenic, osteogenic and adipogenic differentiation capacities, as well as their ability to migrate in response to inflammatory cytokines. SCA-MSCs, AA-MSCs, DS-MSCs and PB-MSCs were isolated and showed plastic adhesion with a fibroblast-like morphology. All MSC lines were positive for CD44, CD105, CD90 and vimentin, characteristic markers of MSCs. The cytokeratin marker was also detected in DS-MSCs. No expression of MHCII or CD34 was detected in any of the four types of MSC. In terms of pluripotency features, all MSC lines expressed POU5F1 and showed alkaline phosphatase activity. SCA-MSCs had a higher growth rate compared to the rest of the cell lines, while the AA-MSC cell line had a longer population doubling time.
All MSC lines cultured under adipogenic, chondrogenic and osteogenic conditions showed differentiation capacity to the previously mentioned mesodermal lineages. All MSC lines showed migration ability in an agarose drop assay. DS-MSCs migrated greater distances than the rest of the cell lines both in nonstimulated conditions and in the presence of the inflammatory cytokines TNF-α and IL-1β. SCA-MSCs and DS-MSCs increased their migration capacity in the presence of IL-1β as compared to PBS control. This study describes the isolation and characterization of porcine cell lines from different tissue origin, with clear MSC properties. We show for the first time a comparative study of the migration capacity induced by inflammatory mediators of porcine MSCs of different tissue origin. Mesenchymal progenitors are a group of adult multipotential stem cells that were first characterized in 1976 by Friedenstein, who isolated them from bone marrow and described them as adherent cells with fibroblastoid morphology, able to differentiate into cells of mesodermal origin such as osteocytes, chondrocytes or adipocytes [1]. Thus, mesenchymal stem cells, also referred to as multipotent stromal cells or mesenchymal stromal cells (MSCs) [2, 3], are multipotent cells with significant clinical importance because of their applicability in cell therapy for regenerative medicine and tissue engineering [4]. In addition, various studies have demonstrated that MSCs are strongly immunosuppressive both in vitro and in vivo [5,6,7,8,9,10], being able to reduce graft-versus-host disease associated with allografts and xenografts [11]. 
In 2006, with the aim of standardization, the International Society for Cellular Therapy (ISCT) proposed three criteria to define the minimal characteristics of MSCs [12]: when maintained in standard culture conditions using tissue culture flasks, they should display plastic adherence; more than 95% of the MSC population should express specific markers such as CD105, CD73 and CD90, and be negative for CD45, CD34, CD14 or CD11b, CD79α or CD19 and HLA class II; and they should be able to differentiate to osteoblasts, adipocytes or chondroblasts in vitro under standard differentiating conditions as demonstrated by specific staining of in vitro cell cultures. The use of MSCs in regenerative medicine in humans and animals is increasing as their characteristics of self-renewal, proliferative capacity and differentiation potential are becoming better controlled. In addition, the ISCT criteria do not guarantee the purification of homogeneous populations of MSCs; in fact, isolation under the ISCT criteria produces nonclonal and heterogeneous cultures of stromal cells, stem cells, progenitor cells and differentiated cells [13]. Previously, many experimental animals such as mouse, rat and rabbit have been tested as models for clinical applications; however, the pig has been highlighted as the best experimental model, based on the similarities of porcine organ physiology with that of human beings [14]. Pigs are currently the animal model of choice for the evaluation of stem cell-based therapy, regenerative medicine and transplantation [15]. Within pigs, there are genetic differences among pig subspecies [16], and Iberian pigs are at risk of obesity and cardiometabolic diseases in the case of nutrient excess, a risk reported both during juvenile development and at adulthood [17]. Thus, Iberian breeding sows are highly sensitive to nutritional and metabolic changes, much more so than lean breeds [18].
For all these reasons, and given its similarity to human obesity and metabolic diseases, the Iberian pig has proven particularly valuable as a biomedical-research animal model for human investigation. In addition, in terms of animal production, the Iberian pig is of considerable economic interest in the livestock sector. Indeed, the Iberian pig is known worldwide for the production of a unique, highly priced dry-cured product, Iberian ham, with a unique taste due to its abundance of intramuscular fat. In fact, the Iberian pig has a high potential for fat accumulation under its skin and among the muscular fibers [19]. Generation of specific porcine cell lines will help in a variety of experimental research and in understanding stem cell xenotransplantation safety in an excellent animal model. MSCs have been described in different porcine tissues, exhibiting the aforementioned stem cell properties such as plastic adherence, multilineage differentiation capacity, expression of MSC markers and pluripotency genes. Postnatal organs and tissues clearly serve as good MSC sources; however, each source of MSCs has a different extent of differentiation potential and expresses a different combination of stem cell-related markers and other important features such as high proliferation, immunomodulation and xenotransplantation ability. Therefore, suitable MSCs should be carefully validated for cell-based therapies before clinical application. One of the most remarkable but least understood findings is the ability of human MSCs to migrate from bone marrow or peripheral blood into damaged tissues.
Transplantation experiments in animals and patients have demonstrated that MSCs migrate to sites of injury, where they enhance wound healing [20], support tissue regeneration following myocardial infarction [21], home to and promote the restoration of the bone marrow microenvironment after damage by myeloablative chemotherapy [22] or help to overcome the molecular defect in children with osteogenesis imperfecta [23]. Although Almalki et al. [24] have recently reported porcine abdominal adipose tissue MSC (AA-MSC) migration ability mediated by cytokines, little is known about the molecular mechanisms regulating cell movement and relocalization in porcine MSCs. For MSCs to be a promising tool for tissue engineering and cell and gene therapy strategies, it is essential to know their migration ability according to their tissue of origin. The most obvious disadvantage of the majority of tissue sources of MSCs described so far is the invasiveness of the harvesting procedure. An excellent alternative source of cells is blood, such as umbilical cord blood collected at birth or peripheral blood (PB) from adult animals. Given that such blood samples can be readily taken in a sterile manner, they may provide a readily accessible source of autologous MSCs for regenerative therapies. In order to standardize the promising results of such therapy, it is essential that well-characterized and homogeneous MSC populations be used. To date, MSCs have been isolated from the peripheral blood (PB-MSCs) of humans, mice, sheep, horses, dogs, cats, rats, rabbits and pigs [7, 25,26,27,28,29,30]. Despite this trend, basic information regarding pig PB-MSCs is still limited. Isolation, culture and karyotyping analysis of MSCs Abdominal adipose tissue, subcutaneous adipose tissue and dermal skin were obtained post mortem from an adult Iberian boar. Beforehand, a blood sample (5 ml) was harvested from the jugular vein using heparin vacutainer tubes.
The samples collected for the isolation and culture of AA-MSCs, SCA-MSCs and DS-MSCs were rinsed several times with water and washed three times with Hank's Balanced Salt Solution (HBSS) supplemented with 500 U/ml penicillin, 500 mg/ml streptomycin and 0.1% bovine serum albumin (BSA) (Merck KGaA, Darmstadt, Germany). Adipose and dermal skin tissues were minced using sterile scissors to enhance the action of collagenase type II (Gibco by Life Technologies, Grand Island, NY, USA). Minced tissues were incubated in a collagenase type II solution—HBSS supplemented with 0.05% collagenase type II, 0.1% BSA and 30 nM CaCl2—for 45 min at 37 °C, shaking gently every 5 min. Thereafter, a volume of culture medium—Dulbecco's modified Eagle's medium low glucose (DMEM-LG) (Hyclone Laboratories, UT, USA), supplemented with 15% fetal calf serum (PAA Laboratories, Austria), 2% nonessential amino acids and antibiotics (100 U/ml penicillin, 100 mg/ml streptomycin)—was added to block the action of the collagenase, and the resulting suspension was centrifuged at 300 × g for 5 min. The resulting pellets were resuspended in culture medium, plated in 100-mm2 tissue culture dishes (JetBiofil, Guangzhou, China) and incubated in an atmosphere of humidified air and 5% CO2 at 37 °C. Culture medium was changed every 48–72 h. Isolated colonies of putative MSCs were apparent after 6–8 days in culture and were maintained in growth medium until ~ 75% confluence. The cells were then treated with 0.05% trypsin–EDTA (T/E) and further cultured for subsequent passages in 100-mm2 dishes at 50,000 cells/cm2. To isolate peripheral blood-derived mononuclear cells, blood (5 ml) diluted 1:1 in phosphate buffered saline (PBS) was layered onto 10 ml of Biocoll separating solution (Biochrom AG, Germany) in a 100-ml tube and centrifuged at 1600 × g for 20 min.
The mononuclear cells were collected from the interphase, washed twice with PBS by centrifugation at 3000 × g for 15 min and then suspended in DMEM-LG supplemented with 10% FCS, 2 mM glutamine, 1 mM MEM nonessential amino acid solution and antibiotics (100 U/ml penicillin, 100 mg/ml streptomycin). Cells obtained from each 30 ml of blood were seeded onto a 100-mm2 tissue culture dish and incubated in an atmosphere of humidified air and 5% CO2 at 37 °C. Nonadherent cells were removed by washing twice with PBS after 48 h of incubation, and fresh complete medium was then added to the dishes. Thereafter, the medium was changed every 48–72 h and cells were split at ~ 75% confluence as before. The MSC chromosome preparation was carried out following the procedures of Rodríguez et al. [31] with minor modifications. Briefly, cells were incubated with 0.1 μg/ml colcemid (Gibco) for 60 min in a humidified incubator (5% CO2, 37 °C) and then detached. The pelleted cells were incubated in 5 ml of hypotonic solution (0.057 M KCl) for 10 min at room temperature, followed by fixation with methanol/glacial acetic acid (3:1) solution. Fixed cells were dropped onto wet slides and air-dried overnight at 60 °C to obtain a GTL-banding chromosome pattern. Leishman staining for GTL-banding was carried out, and metaphases were fully karyotyped under a Nikon Eclipse E400 microscope. Images were then captured with an IAI® Progressive Scan digital camera using Cytovision Genus® software. Immunocytochemical analysis by flow cytometry Surface, cytoplasmic and nuclear cell antigens were examined by flow cytometry using a Cell Lab Quanta SC system from Beckman Coulter. Cell cultures at 80–90% confluence were detached using T/E solution, collected, fixed with 4% paraformaldehyde for 10 min and subsequently washed twice with PBS.
For analysis of the expression of vimentin (clone LN-6; Sigma-Aldrich), cytokeratin (clone C-11; Sigma-Aldrich) (cytoplasmic proteins) and POU5F1 (rabbit polyclonal; Biorbyt) (a nuclear protein), cell permeabilization was performed by incubation with 0.3–0.5% Triton X-100 for 10 min and washing with PBS. Nonspecific binding of the antibodies was blocked with TNB-blocking solution for 30 min at 37 °C. Appropriate dilutions, as provided by the manufacturers, of primary antibodies against the markers commonly used to define MSCs—vimentin (clone LN-6; Sigma-Aldrich), CD44 (clone IM7; Bio-Rad), CD105 (clone MEM-229; Abcam) and CD90 (clone 5E10; Abcam) as positive markers; cytokeratin, CD34 (rabbit polyclonal; Biorbyt) and MHCII (clone CVS20; Bio-Rad) as negative markers; and POU5F1 as a pluripotency marker—were added to the cells and incubated overnight at 4 °C. Cells were then stained with the appropriate Alexa Fluor 488-conjugated secondary antibodies (Jackson ImmunoResearch Laboratories, West Grove, PA, USA). Negative control samples were obtained by omission of the primary antibody. Analysis of the samples was performed with the Cell Lab Quanta SC system from Beckman Coulter using FlowJo X software version 10.0.7r2. Alkaline phosphatase activity AA-MSC, SCA-MSC, DS-MSC and PB-MSC lines at passages 10–15 were grown on 35-mm dishes (JetBiofil, Guangzhou, China) for 2 weeks. Cells were washed twice with PBS and fixed with a solution of 4% paraformaldehyde for 10 min at room temperature. The paraformaldehyde was aspirated and the plates were washed twice with distilled water and covered with Solution B (1 ml of Solution A (Fast Red 1 mg/ml), 1.6 μl of Naphthol AS-MX phosphate and 40 μl Tris–HCl 1 M, pH 8.6) for 10–15 min at room temperature in the dark. Solution B was finally removed and the cells were washed twice with PBS and covered with PBS to prevent drying. The colonies were examined for the appearance of pink/red coloration indicating alkaline phosphatase (AP) activity.
The stained colonies were imaged using an inverted Nikon Diaphot phase-contrast microscope coupled to a Jenoptik ProgRes CT1 digital camera. Images were captured using ProgRes capture pro software version 2.7 (Jenoptik Laser, Optic Systeme GmbH). Cell proliferation measurement The different mesenchymal cell lines at passages 9–11 were seeded at 2 × 10⁵ cells per 60-mm tissue culture plate (JetBiofil, Guangzhou, China). The culture medium was changed every 2 days. At each time point, duplicate plates were detached by trypsinization and the cells counted using a Bürker counting chamber (Paul Marienfeld GmbH & Co., Lauda-Königshofen, Germany). Then, 20 μl of cell suspension was placed on both sides of the chamber and viewed at 100× magnification under an inverted Nikon Diaphot phase-contrast microscope coupled to a Jenoptik ProgRes CT1 digital camera. Images were captured using ProgRes capture pro software version 2.7 (Jenoptik Laser, Optic Systeme GmbH). A total of five 1-mm² squares per sample were analyzed and the number of cells per milliliter was determined according to the equation: $$ \mathrm{Number}\ \mathrm{of}\ \mathrm{cells}\ \mathrm{in}\ 1\ \mathrm{ml}=\mathrm{N}/\mathrm{Z}\times \mathrm{dilution}\times {10}^4, $$ where N = the total number of cells counted and Z = the number of squares counted. The cell population doubling time (PDT) was calculated using Roth V. 2006 Doubling Time Computing (available from http://www.doubling-time.com/compute.php). In vitro differentiation potential assay AA-MSC, SCA-MSC, DS-MSC and PB-MSC lines at passages 10–15 were grown to 90% confluence on 12-well/24-well multidishes (JetBiofil, Guangzhou, China). For adipogenic differentiation, the StemPro® Adipogenesis Differentiation Kit (Thermo Fisher Scientific, Rockford, IL, USA) was used according to the manufacturer's instructions. Differentiating media were changed every 2–3 days for 14 days. Simultaneously, control cells were cultured in standard conditions.
Cells were then fixed in 4% paraformaldehyde solution for 10–15 min. After fixation, cells were incubated for 5 min in 60% isopropanol and stained with Oil Red O (Merck KGaA, Darmstadt, Germany) solution to visualize the accumulation of red lipid droplets. Cells were photographed using a Nikon Diaphot light microscope coupled to a Canon EOS 500D digital camera. For osteogenic differentiation, the StemPro® Osteogenesis Differentiation Kit (Thermo Fisher Scientific) was used according to the manufacturer's instructions. Differentiating media were changed every 3–4 days for 21 days. Simultaneously, control cells were cultured in standard conditions. Cells were then fixed in 4% paraformaldehyde solution for 30 min. After fixation, cells were incubated for 2–3 min in 2% Alizarin Red S solution (pH 4.2) to visualize the calcium deposits. For chondrogenic differentiation, the StemPro® Chondrogenesis Differentiation Kit (Thermo Fisher Scientific) was used according to the manufacturer's instructions. Differentiating media were changed every 2–3 days for 14 days. Simultaneously, control cells were cultured in standard conditions. Cells were then fixed in 4% paraformaldehyde solution for 30 min. After fixation, cells were incubated for 30 min with 1% Alcian Blue solution prepared in 0.1 N HCl. Blue staining corresponded to proteoglycans synthesized by chondrocytes. Cells under osteogenic and chondrogenic differentiation conditions were photographed using a Motic SMZ-171 stereomicroscope coupled to a Moticam BTU8 digital camera. Cell migration measurement: agarose spot assay The cell migration measurement by agarose spot assay was carried out following the procedures of Wiggins and Rappoport [32] with minor modifications. Briefly, a PBS–0.5% agarose solution was heated in a water bath until boiling to facilitate complete dissolution.
When the temperature had cooled down to 40 °C, 90 μl of agarose solution was pipetted into a 1.5-ml Eppendorf tube containing 10 μl of PBS or PBS supplemented with TNF-α or IL-1β to a final concentration of 6 nM [33]. Then, 5-μl spots of agarose containing PBS, TNF-α or IL-1β were pipetted onto six-well plates (JetBiofil, Guangzhou, China), 16 drops per well, 12 drops per MSC line, and allowed to cool for 15 min at 4 °C. At this point, cells that had been treated overnight with mitomycin C at 1 μg/ml (Merck KGaA, Darmstadt, Germany) to prevent cell division were plated onto the spot-containing dishes in the presence of culture media. Imaging was performed at 24 and 48 h using a Motic SMZ-171 stereomicroscope coupled to a Moticam BTU8 digital camera and Motic Image Plus software version 2.0 (Motic China Group Co., Ltd). Motile cells penetrated the agarose spot. The longest straight distance from the border of the spot was measured for each cell using ImageJ. Statistical analysis was performed using GraphPad Prism 6 (GraphPad Software, La Jolla, CA, USA). One-way ANOVA for multiple comparisons with Fisher's LSD tests was used for cell proliferation and doubling time. Two-way ANOVA for multiple comparisons with Fisher's LSD tests was used for cell migration. Values are expressed as mean ± standard error of the mean (SEM). Differences were considered significant when p < 0.05. Morphological features and chromosomal stability As shown in Fig. 1, we successfully isolated MSCs from the abdominal adipose tissue, subcutaneous adipose tissue, dermal skin and peripheral blood of an adult male Iberian pig. In primary culture, MSCs from all four sources adhered to the plastic surface of the culture dishes, exhibiting a mixture of round, spindle-shaped or elongated morphologies (Fig. 1a). However, after the first cell passage, the cells formed a homogeneous population of fibroblast-like adherent cells (Fig. 1b).
Morphology of MSCs at (a) passage 0 and 8 days of culture and (b) first passage and 13 days of culture. Phase-contrast images acquired at 100× magnification. Bars = 70 μm. (c) Representative P10 metaphase and karyotype. No chromosomal aberrations were observed in AA-MSCs after long-term cultivation. AA-MSC abdominal adipose tissue mesenchymal stem/stromal cell, DS-MSC dermal skin tissue mesenchymal stem/stromal cell, PB-MSC peripheral blood mesenchymal stem/stromal cell, SCA-MSC subcutaneous adipose tissue mesenchymal stem/stromal cell To analyze the chromosomal stability of MSCs during in vitro culture, the AA-MSC line expanded through 10 passages was used for GTL-banding. No chromosomal translocations, deletions or extra chromosomes were observed (Fig. 1c). Expression of cell surface, intracellular and pluripotency markers Expression of MSC markers has been reported to differ in porcine MSCs of different tissue origin [34]. For further characterization of all four types of MSCs, some characteristic cell surface and intracellular markers were assessed by flow cytometry (Fig. 2). All cell types were positive for cell surface expression of CD44, CD105 and CD90 and the cytoplasmic marker vimentin, characteristic of MSCs. Interestingly, the cytoplasmic marker cytokeratin, typical of epithelia of ectodermal and endodermal origin and commonly used as a negative marker of MSCs, could also be detected in DS-MSCs. No expression of immunophenotype markers, such as MHCII or CD34, was detected in any of the four lines of MSCs (Fig. 2). Analysis by flow cytometry of expression levels of cell surface markers CD34, CD44, CD105, CD90 and MHCII and intracellular markers cytokeratin, vimentin and POU5F1 in AA-MSCs, DS-MSCs, SCA-MSCs and PB-MSCs. Data correspond to mean fluorescence intensity (fold of negative control) for each sample.
AA-MSC abdominal adipose tissue mesenchymal stem/stromal cell, DS-MSC dermal skin tissue mesenchymal stem/stromal cell, MHCII major histocompatibility complex II, PB-MSC peripheral blood mesenchymal stem/stromal cell, POU5F1 POU class 5 homeobox 1, SCA-MSC subcutaneous adipose tissue mesenchymal stem/stromal cell MSC lines were analyzed for pluripotency features. All MSC lines were positive for the nuclear marker POU5F1 (Fig. 2) and stained positive for alkaline phosphatase (Fig. 3). The lowest level of alkaline phosphatase activity was observed in DS-MSCs. Analysis of alkaline phosphatase (AP) activity. Bright-field images obtained at 100× (a) or 32× (b) magnification, showing red-stained cell groups after the action of alkaline phosphatase on Fast Red in the presence of Naphthol AS-MX phosphate. Bars = 70 μm (top panels) and 150 μm (bottom panels). AA-MSC abdominal adipose tissue mesenchymal stem/stromal cell, DS-MSC dermal skin tissue mesenchymal stem/stromal cell, PB-MSC peripheral blood mesenchymal stem/stromal cell, SCA-MSC subcutaneous adipose tissue mesenchymal stem/stromal cell Proliferation capacity To analyze the proliferation capacity of the MSCs, the number of cells/dish was counted for each cell line at days 3, 4, 5, 7 and 11, starting in all cases from an initial seeding of 2 × 10⁵ cells. As shown in Fig. 4a, the number of cells increased for all cell lines throughout the assay. On day 11, the 60-mm culture plates contained the following total numbers of cells: for the most proliferative line, SCA-MSCs, 316.8 × 10⁴ ± 30.9 × 10⁴ cells; for DS-MSCs, 294.3 × 10⁴ ± 47.4 × 10⁴ cells; for AA-MSCs, 217.2 × 10⁴ ± 45.3 × 10⁴ cells; while PB-MSCs, with a significantly lower proliferation rate throughout the experiment, presented 154.5 × 10⁴ ± 30.9 × 10⁴ cells. In vitro proliferation of MSCs. (a) Absolute number of cells/dish (mean ± SD). (b) Doubling time of each MSC line (mean ± SD). Different lowercase letters indicate significant differences (p < 0.05).
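The doubling times in Fig. 4b are derived from successive cell counts. The exact formula is not stated in the text, so the sketch below assumes the standard exponential-growth definition, PDT = t·ln 2 / ln(Nt/N0); treat it as an illustration rather than the authors' computation.

```python
from math import log

def population_doubling_time(n_start, n_end, days):
    """Population doubling time (days) under exponential growth:
    PDT = t * ln(2) / ln(N_t / N_0)."""
    return days * log(2) / log(n_end / n_start)

# A culture growing from 2 × 10⁵ to 1.6 × 10⁶ cells in 9 days has
# doubled three times, so its PDT is 3 days.
pdt = population_doubling_time(2e5, 1.6e6, 9)  # → 3.0
```

Note that the published PDT values may have been averaged over shorter intervals between counting days rather than computed over the whole assay, which would explain values larger than a single end-to-end estimate.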
AA-MSC abdominal adipose tissue mesenchymal stem/stromal cell, DS-MSC dermal skin tissue mesenchymal stem/stromal cell, MSC mesenchymal stem/stromal cell, PB-MSC peripheral blood mesenchymal stem/stromal cell, SCA-MSC subcutaneous adipose tissue mesenchymal stem/stromal cell Fig. 4b shows the proliferation rate of the MSCs in terms of the population doubling time (PDT). On day 11, AA-MSCs showed a significantly higher PDT (8.4 ± 1.4 days) than the rest of the MSC lines (DS-MSCs 5.9 ± 1.8 days, SCA-MSCs 5.4 ± 3.6 days and PB-MSCs 4.6 ± 1.5 days). In vitro differentiation of MSCs As shown in Fig. 5, all MSC lines cultured under adipogenic or osteogenic conditions presented cytoplasmic lipid droplets or distinctive calcium deposits, respectively. A comparable amount of cytoplasmic lipid droplets was observed in all MSCs, while the staining pattern of calcium deposits was strongest in DS-MSCs and PB-MSCs, indicating a high differentiation potential of these lines. Cells cultured under chondrogenic conditions showed the presence of acidic proteoglycans, demonstrated in monolayer cultures by Alcian blue staining. In addition, AA-MSCs presented stained nodules typical of a cartilaginous tissue phenotype. In vitro differentiation of MSCs to different lineages. Images show Oil Red O staining of lipid droplets in cells cultured in basal medium (Control) or in adipogenic differentiation medium (top panels); Alcian blue staining of acidic proteoglycans in cells cultured in basal medium (Control) or in chondrogenic differentiation medium (middle panels); and Alizarin Red S staining of calcium deposits in cells cultured in basal medium (Control) or in osteogenic differentiation medium (bottom panels). Bright-field images acquired with 200× magnification (bars = 70 μm) for the top panels and 3× magnification (bars = 150 μm) for the middle and bottom panels.
AA-MSC abdominal adipose tissue mesenchymal stem/stromal cell, DS-MSC dermal skin tissue mesenchymal stem/stromal cell, PB-MSC peripheral blood mesenchymal stem/stromal cell, SCA-MSC subcutaneous adipose tissue mesenchymal stem/stromal cell Migration ability of MSC lines The invasion capacity of all MSC lines was assessed using the agarose spot assay [32] with minor modifications. This assay measures cell invasion by analyzing the crawling of cells underneath an agarose gel on a planar surface (Fig. 6). All MSC lines showed migration capacity in the agarose drop test at 48 h. DS-MSCs migrated greater distances than the rest of the cell lines, both under unstimulated conditions and in the presence of the inflammatory cytokines TNF-α and IL-1β (Fig. 7, a–c). Representative images of the AA-MSC migration assay into PBS-, TNF-α- or IL-1β-containing agarose spots after 48 h. Images obtained with a light stereomicroscope at 20× magnification. IL-1β interleukin-1β, PBS phosphate buffered saline, TNF-α tumor necrosis factor alpha Migration analysis in the agarose spot assay. Distance migrated from the border of the agarose spot, measured in two independent experiments for AA-MSCs, SCA-MSCs, DS-MSCs and PB-MSCs at 48 h (mean ± SD). Different lowercase letters indicate significant differences (p < 0.05 for MSC migration mediated by PBS (a, b, c) and TNF-α (f, g, h); p < 0.005 for MSC migration mediated by IL-1β (j, k, l)). *p < 0.05; **p < 0.005; ***p < 0.0005. AA abdominal adipose tissue, DS dermal skin tissue, IL interleukin, MSC mesenchymal stem/stromal cell, PB peripheral blood, PBS phosphate buffered saline, SCA subcutaneous adipose tissue, TNF tumor necrosis factor SCA-MSCs and DS-MSCs significantly increased their migration capacity in the presence of IL-1β compared to the PBS control. Moreover, IL-1β was a significantly more potent stimulus than TNF-α for the AA-MSC and PB-MSC cell lines (Fig. 7).
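Migration in Fig. 7 is reported as the mean ± SD of per-cell distances from the spot border. As a small sketch of that summary step — the distances below are hypothetical values in μm, not data from the paper:

```python
from statistics import mean, stdev

# Hypothetical per-cell migration distances (μm) from the spot border
# for one cell line under two spot conditions
distances = {
    "PBS":   [110.0, 95.0, 120.0, 105.0],
    "IL-1b": [180.0, 200.0, 170.0, 190.0],
}

# Summarize each condition as (mean, sample standard deviation)
summary = {cond: (mean(v), stdev(v)) for cond, v in distances.items()}
for cond, (m, sd) in summary.items():
    print(f"{cond}: {m:.1f} ± {sd:.1f} um")
```

In the study these per-condition summaries were then compared across cell lines and cytokines by two-way ANOVA with Fisher's LSD post-hoc tests.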
The results of the present study clearly demonstrated that AA-MSCs, SCA-MSCs, DS-MSCs and PB-MSCs shared similar characteristics in terms of morphology, alkaline phosphatase activity, expression of cell surface and pluripotency-related markers, differentiation ability into adipocytes and proliferative capacity. In addition, all of the MSC lines analyzed showed the in vitro migration ability characteristic of mesenchymal cells. Our findings showed that porcine MSCs could be isolated from abdominal adipose, subcutaneous adipose, dermal skin and peripheral blood tissues of an adult male Iberian pig and successfully expanded in vitro. Passaged cells had a more homogeneous morphology than primary cultures and formed colonies as the culture progressed. These morphological observations suggest that the isolated cells may contain both mature and progenitor populations, as has been demonstrated in previous studies [35,36,37]. The use of MSCs in cell therapy involves in vitro expansion to achieve a sufficient number of cells, which implicitly carries the risk of propagating cells with genetic abnormalities during cell culture. Genetic abnormalities may lead to transformation and poor performance in clinical use, and are a critical safety concern for cell therapies using MSCs [38]. Karyotyping is a practical way to assess genome stability and can be useful as part of the initial characterization of an MSC population. AA-MSCs expanded through 10 passages did not show chromosomal translocations, deletions or an abnormal chromosome number. Many studies have demonstrated that alkaline phosphatase activity is a pluripotency marker of stem cells, including porcine MSCs from umbilical cord [39] and from skin [40]. However, many other authors do not obtain such conclusive results, showing instead that alkaline phosphatase activity decreases with donor age regardless of the sex of the pig and the tissue type [5].
On the other hand, the level of staining of cells expressing alkaline phosphatase activity is not always uniform, varying according to the tissue source studied [5]. There are also studies demonstrating that the expression of alkaline phosphatase varies over time during the assay [41]. Contradictory results have been obtained in studies of tissue-specific MSCs using alkaline phosphatase activity as a measure of stem cell maintenance capability [42]. Ock et al. [43] found that canine adipose MSCs have extremely low AP activity but a higher potential for differentiation along the osteogenic and adipogenic pathways than other MSC types. Consistent with this, Ock et al. [5] also found that porcine adipose MSCs were more capable of undergoing in vitro differentiation while also having the lowest AP activity. Our MSCs derived from all sources were positive for AP activity, with the lowest level observed in DS-MSCs. Similar results were reported by Song et al. [37], who found a greater intensity of AP expression in MSCs of adipose origin compared to MSCs of cutaneous origin. To confirm the multipotency of the MSCs, we examined the expression of typical markers of multipotent mesenchymal stem cells reported in the literature. Major histocompatibility complex class II (MHCII) molecules are found in antigen-presenting cells such as dendritic cells, mononuclear phagocytes, some endothelial cells, thymic epithelial cells and B cells. MSCs must be negative for MHCII expression [34]. CD34 is an antigen of hematopoietic progenitor cells that should also be absent in MSCs, since these cells do not have hematopoietic characteristics [44]. Vimentin is the main component of the intermediate filament cytoskeleton of mesenchymal cells, involved in adhesion, migration and cell signaling.
It is commonly used as a marker for mesenchymal cells and in mesenchymal histopathological diagnosis, and has previously been used as a positive cell marker when characterizing porcine mesenchymal cells [34, 45]. CD44 is a cell adhesion surface molecule present in porcine MSCs, as demonstrated in numerous cell characterization studies, as are CD105 and CD90 [16, 45]. A disadvantage of CD105 is the limited cross-reactivity of anti-human antibodies with animal cells [46]. The POU-domain transcription factor POU5F1 (Oct-4) has been considered one of the main regulators of differentiation and self-renewal of pluripotent stem cells [47]. It is important to note that the expression of POU5F1 can be studied at the protein level using western blot assays or immunostaining, or at the mRNA level by PCR amplification methods [48]. Recent studies have reported the detection of this transcription factor in porcine MSCs from umbilical cord, dermal skin, bone marrow and adipose and ovarian tissues [5, 35, 37, 49]. Most of the assays performed indicate that the expression of POU5F1 depends on the cell passage number, cell source and age [42, 50]. The expression of this marker varies according to the source, reflecting the fact that some mesenchymal cells have a greater stemness capacity than others [5]. Our data demonstrate that MSCs derived from abdominal adipose, subcutaneous adipose, dermal skin and peripheral blood tissues were negative for cytokeratin (except DS-MSCs), MHCII and CD34, but positive for vimentin and POU5F1, and strongly positive for CD44. Expression of POU5F1 has been confirmed by flow cytometry in dermal skin MSCs and bone marrow MSCs [5]. Previous studies showed that bone marrow, skin and adipose tissue-derived MSCs were positive for vimentin but negative for cytokeratin [37]. However, in our analyses, although DS-MSCs were positive for vimentin, they also showed low levels of cytokeratin expression.
Cytokeratin is also a component of the intermediate filament cytoskeleton, but it is restricted to epithelial tissues. Cytokeratin expression is therefore specific to epithelial cells, making it a cellular marker used for the diagnosis and characterization of tissues. Song et al. [37] have also reported cytokeratin expression in porcine MSCs derived from adipose and ovarian tissue. The ability of MSCs to divide and differentiate can be assessed, at least in part, by evaluating their proliferative capacity. One of the characteristics of mesenchymal cells is their almost unlimited proliferation capacity [34]. Studies show that the proliferative and self-renewing capacity of this cell type is related to telomerase activity and expression of OCT3/4 [51]. Some reports show that the proliferative capacity of porcine mesenchymal cells decreases as the age of the donor animal increases [52]. Likewise, this property differs according to the type of tissue studied, and differences in proliferation rate between mesenchymal cells derived from bone marrow and adipose tissue have been reported [5]. It is important to highlight that in some cases MSCs are able to divide in vitro only to a limited extent before entering replicative senescence. Between passages 7 and 12, MSCs increase their cell size and reduce the expression of certain pluripotency markers, leading to proliferative arrest [53, 54]. However, it should also be considered that this event has not been demonstrated in MSCs of all species. All our mesenchymal lines were established from tissue samples of a single adult (2-year-old) Iberian pig, and our results indicated that DS-MSCs had the greatest proliferation potential while AA-MSCs showed the longest population doubling time. In addition, all MSC lines had a high proliferative capacity until passages 9–11, as shown in the proliferation assay. At that time, robust proliferation was always observed. In this regard, Li et al.
[55] reported a novel role for vimentin, highly expressed in our cells, in connection with AFP+ cells and BrdU+ cells, indicating that these cells are activated for proliferation. Multipotent differentiation potential is one of the defining criteria proposed by the ISCT, making MSCs a favorable choice in regenerative therapy [12]. MSCs have the unique quality of multilineage differentiation upon induction with specific differentiation media supplemented with growth factors. Understanding the molecular mechanisms, intracellular pathways and factors responsible for the various differentiation abilities of MSCs from different sources has been a matter of great interest in recent decades. Initial investigations were mainly focused on the mesodermal differentiation capacities of stem cells; however, with advances in knowledge and technology such as gene targeting and protein engineering, MSC research has reached beyond mesodermal differentiation to multilineage specialized cell differentiation, revolutionizing the field of regenerative medicine. Our data for AA-MSCs, SCA-MSCs, DS-MSCs and PB-MSCs revealed the basic in vitro trilineage differentiation capacity, that is, differentiation into adipocytes, osteocytes and chondrocytes, as observed previously in the swine model [56,57,58] and in human MSCs [59, 60]. One of the most remarkable findings is the ability of MSCs to migrate from bone marrow or peripheral blood into damaged tissues. MSCs are currently being investigated for use in a wide variety of clinical applications. For most of these applications, systemic delivery of the cells is preferred. However, this requires the homing and migration of MSCs to a target tissue. Recently, Almalki et al. [24] reported the migratory activity of porcine AA-MSCs and evaluated the effect of MMP-2, MMP-14 and ATR2 siRNA silencing on the migration of this cell line. Our results indicated that all MSC lines showed migration activity.
The observed nonchemotactic invasion into PBS-containing spots is most likely due to the highly motile nature of these MSC lines. Accordingly, DS-MSCs migrated greater distances than the rest of the cell lines, both in the absence and in the presence of the inflammatory cytokines TNF-α and IL-1β. SCA-MSCs and DS-MSCs significantly increased their migration capacity in the presence of IL-1β after 48 h compared to the PBS control. The literature reports that MSCs exhibit both tissue- and donor-related variability, not only in mRNA expression but also with regard to chemokine and cytokine production [61,62,63,64,65]. Future studies will aim at analyzing the degree of individual variability presented by the different MSCs isolated in this work. This report shows for the first time a comparative study of porcine MSCs of different tissue origin, including PB-MSCs. To date, porcine PB-MSCs have only been compared to bone marrow MSCs [30, 66] and AA-MSCs [67]. The migration capacity of porcine AA-MSCs has recently been reported [24], but a comparative study of migration capacity between different lines of porcine MSCs is shown here for the first time. In summary, this study describes the isolation and characterization of porcine cell lines of different tissue origin with a clear mesenchymal pattern. We show for the first time a comparative study, including the migration capacity induced by inflammatory mediators, of porcine MSCs of different tissue origin.
AA-MSC: Abdominal adipose tissue mesenchymal stem/stromal cell DMEM-LG: Dulbecco's modified Eagle's medium low glucose DS-MSC: Dermal skin tissue mesenchymal stem/stromal cell HBSS: Hank's Balanced Salt Solution MHCII: Major histocompatibility complex II MSC: Mesenchymal stem/stromal cell PB-MSC: Peripheral blood mesenchymal stem/stromal cell POU5F1: POU class 5 homeobox 1 SCA-MSC: Subcutaneous adipose tissue mesenchymal stem/stromal cell T/E: Trypsin–ethylenediamine tetraacetic acid TNF-α: Tumor necrosis factor alpha Friedenstein AJ, Gorskaja JF, Kulagina NN. Fibroblast precursors in normal and irradiated mouse hematopoietic organs. Exp Hematol. 1976;4:267–74. Lindner U, Kramer J, Rohwedel J, Schlenke P. Mesenchymal stem or stromal cells: toward a better understanding of their biology. Transfus Med Hemother. 2010;37:75–83. Caplan AI. Mesenchymal stem cells: time to change the name. Stem Cells Transl Med. 2017;6:1445–51. Trohatou O, Roubelakis MG. Mesenchymal stem/stromal cells in regenerative medicine: past, present, and future. Cell Reprogram. 2017;19:217–24. Ock SA, Baregundi Subbarao R, Lee YM, Lee JH, Jeon RH, Lee SL, et al. Comparison of immunomodulation properties of porcine mesenchymal stromal/stem cells derived from the bone marrow, adipose tissue, and dermal skin tissue. Stem Cells Int. 2016;2016:9581350. Carrade DD, Lame MW, Kent MS, Clark KC, Walker NJ, Borjesson DL. Comparative analysis of the immunomodulatory properties of equine adult-derived mesenchymal stem cells. Cell Med. 2012;4:1–11. Fu WL, Li J, Chen G, Li Q, Tang X, Zhang CH. Mesenchymal stem cells derived from peripheral blood retain their pluripotency, but undergo senescence during long-term culture. Tissue Eng Part C Methods. 2015;21:1088–97. Uccelli A, Pistoia V, Moretta L. Mesenchymal stem cells: a new strategy for immunosuppression. Trends Immunol. 2007;28:219–26. Parys M, Kruger JM, Yuzbasiyan-Gurkan V. Evaluation of immunomodulatory properties of feline mesenchymal stem cells. Stem Cells Dev.
2017;26:776–85. Chow L, Johnson V, Coy J, Regan D, Dow S. Mechanisms of immune suppression utilized by canine adipose and bone marrow-derived mesenchymal stem cells. Stem Cells Dev. 2017;26:374–89. Gallardo D, de la Cámara R, Nieto JB, Espigado I, Iriondo A, Jiménez-Velasco A, et al. Is mobilized peripheral blood comparable with bone marrow as a source of hematopoietic stem cells for allogeneic transplantation from HLA-identical sibling donors? A case-control study. Haematologica. 2009;94:1282–8. Squillaro T, Peluso G, Galderisi U. Clinical trials with mesenchymal stem cells: an update. Cell Transplant. 2016;25:829–48. Swindle MM, Makin A, Herron AJ, Clubb FJ, Frazier KS. Swine as models in biomedical research and toxicology testing. Vet Pathol. 2012;49:344–56. Ringe J, Kaps C, Burmester GR, Sittinger M. Stem cells for regenerative medicine: advances in the engineering of tissues and organs. Naturwissenschaften. 2002;89:338–51. Ramírez O, Burgos-Paz W, Casas E, Ballester M, Bianco E, Olalde I, et al. Genome data from a sixteenth century pig illuminate modern breed relationships. Heredity (Edinb). 2015;114:175–84. Gonzalez-Bulnes A, Astiz S, Ovilo C, Lopez-Bote CJ, Torres-Rovira L, Barbero A, et al. Developmental origins of health and disease in swine: implications for animal production and biomedical research. Theriogenology. 2016;86:110–9. Benítez R, Fernández A, Isabel B, Núñez Y, De Mercado E, Gómez-Izquierdo E, et al. Modulatory effects of breed, feeding status, and diet on adipogenic, lipogenic, and lipolytic gene expression in growing Iberian and Duroc pigs. Int J Mol Sci. 2017;19:22. Torres-Rovira L, Astiz S, Caro A, Lopez-Bote C, Ovilo C, Pallares P, et al. Diet-induced swine model with obesity/leptin resistance for the study of metabolic syndrome and type 2 diabetes. ScientificWorldJournal. 2012;2012:510149. Mackenzie TC, Flake AW. 
Human mesenchymal stem cells persist, demonstrate site-specific multipotential differentiation, and are present in sites of wound healing and tissue regeneration after transplantation into fetal sheep. Blood Cells Mol Dis. 2001;27:601–4. Kawada H, Fujita J, Kinjo K, Matsuzaki Y, Tsuma M, Miyatake H, et al. Nonhematopoietic mesenchymal stem cells can be mobilized and differentiate into cardiomyocytes after myocardial infarction. Blood. 2004;104:3581–7. Koç ON, Gerson SL, Cooper BW, Dyhouse SM, Haynesworth SE, Caplan AI, et al. Rapid hematopoietic recovery after coinfusion of autologous-blood stem cells and culture-expanded marrow mesenchymal stem cells in advanced breast cancer patients receiving high-dose chemotherapy. J Clin Oncol. 2000;18:307–16. Horwitz EM, Prockop DJ, Fitzpatrick LA, Koo WW, Gordon PL, Neel M, et al. Transplantability and therapeutic effects of bone marrow-derived mesenchymal cells in children with osteogenesis imperfecta. Nat Med. 1999;5:309–13. Almalki SG, Agrawal DK. ERK signaling is required for VEGF-A/VEGFR2-induced differentiation of porcine adipose-derived mesenchymal stem cells into endothelial cells. Stem Cell Res Ther. 2017;8:113. Roufosse CA, Direkze NC, Otto WR, Wright NA. Circulating mesenchymal stem cells. Int J Biochem Cell Biol. 2004;36:585–97. Lyahyai J, Mediano DR, Ranera B, Sanz A, Remacha AR, Bolea R, et al. Isolation and characterization of ovine mesenchymal stem cells derived from peripheral blood. BMC Vet Res. 2012;8:169. Spaas JH, De Schauwer C, Cornillie P, Meyer E, Van Soom A, Van de Walle GR. Culture and characterisation of equine peripheral blood mesenchymal stromal cells. Vet J. 2013;195:107–13. Sato K, Yamawaki-Ogata A, Kanemoto I, Usui A, Narita Y. Isolation and characterisation of peripheral blood-derived feline mesenchymal stem cells. Vet J. 2016;216:183–8. Fu Q, Zhang Q, Jia LY, Fang N, Chen L, Yu LM, et al. 
Isolation and characterization of rat mesenchymal stem cells derived from granulocyte colony-stimulating factor-mobilized peripheral blood. Cells Tissues Organs. 2015–16;201:412–22. Faast R, Harrison SJ, Beebe LF, McIlfatrick SM, Ashman RJ, Nottle MB. Use of adult mesenchymal stem cells isolated from bone marrow and blood for somatic cell nuclear transfer in pigs. Cloning Stem Cells. 2006;8:166–73. Rodríguez A, Sanz E, De Mercado E, Gómez E, Martín M, Carrascosa C, et al. Reproductive consequences of a reciprocal chromosomal translocation in two Duroc boars used to provide semen for artificial insemination. Theriogenology. 2010;74:67–74. Wiggins H, Rappoport J. An agarose spot assay for chemotactic invasion. BioTechniques. 2010;48:121–4. Miyamoto Y, Skarzynski DJ, Okuda K. Is tumor necrosis factor alpha a trigger for the initiation of endometrial prostaglandin F(2alpha) release at luteolysis in cattle? Biol Reprod. 2000;62:1109–15. Bharti D, Shivakumar SB, Subbarao RB, Rho GJ. Research advancements in porcine derived mesenchymal stem cells. Curr Stem Cell Res Ther. 2016;11:78–93. Kang EJ, Byun JH, Choi YJ, Maeng GH, Lee SL, Kang DH, et al. In vitro and in vivo osteogenesis of porcine skin-derived mesenchymal stem cell-like cells with a demineralized bone and fibrin glue scaffold. Tissue Eng Part A. 2010;16:815–27. Williams KJ, Picou AA, Kish SL, Giraldo AM, Godke RA, Bondioli KR. Isolation and characterization of porcine adipose tissue-derived adult stem cells. Cells Tissues Organs. 2008;188:251–8. Song S-H, Kumar BM, Kang E-J, Lee Y-M, Kim T-H, Ock S-A, et al. Characterization of porcine multipotent stem/stromal cells derived from skin, adipose, and ovarian tissues and their differentiation in vitro into putative oocyte-like cells. Stem Cells Dev. 2011;20:1359–70. Stultz BG, McGinnis K, Thompson EE, Lo Surdo JL, Bauer SR, Hursh DA. Chromosomal stability of mesenchymal stromal cells during in vitro culture. Cytotherapy. 2016;18:336–43.
Carlin R, Davis D, Weiss M, Schultz B, Troyer D. Expression of early transcription factors Oct-4, Sox-2 and Nanog by porcine umbilical cord (PUC) matrix cells. Reprod Biol Endocrinol. 2006;4:8. Kumar BM, Yoo JG, Ock SA, Kim JG, Song HJ, Kang EJ, et al. In vitro differentiation of mesenchymal progenitor cells derived from porcine umbilical cord blood. Mol Cells. 2007;24:343–50. Juhásová J, Juhás S, Klíma J, Strnádel J, Holubová M, Motlík J. Osteogenic differentiation of miniature pig mesenchymal stem cells in 2D and 3D environment. Physiol Res. 2011;60:559–71. Chen J, Lu Z, Cheng D, Peng S, Wang H. Isolation and characterization of porcine amniotic fluid-derived multipotent stem cells. PLoS One. 2011;6:e19964. Ock SA, Maeng GH, Lee YM, Kim TH, Kumar BM, Lee SL, et al. Donor-matched functional and molecular characterization of canine mesenchymal stem cells derived from different origins. Cell Transplant. 2013;22:2311–21. Wang X, Zheng F, Liu O, Zheng S, Liu Y, Wang Y, et al. Epidermal growth factor can optimize a serum-free culture system for bone marrow stem cell proliferation in a miniature pig model. In Vitro Cell Dev Biol Anim. 2013;49:815–25. Park BW, Kang DH, Kang EJ, Byun JH, Lee JS, Maeng GH, et al. Peripheral nerve regeneration using autologous porcine skin-derived mesenchymal stem cells. J Tissue Eng Regen Med. 2012;6:113–24. Boxall SA, Jones E. Markers for characterization of bone marrow multipotential stromal cells. Stem Cells Int. 2012;2012:975871. Kashyap V, Rezende NC, Scotland KB, Shaffer SM, Persson JL, Gudas LJ, et al. Regulation of stem cell pluripotency and differentiation involves a mutual regulatory circuit of the NANOG, OCT4, and SOX2 pluripotency transcription factors with polycomb repressive complexes and stem cell microRNAs. Stem Cells Dev. 2009;18:1093–108. Subbarao RB, Ullah I, Kim EJ, Jang SJ, Lee WJ, Jeon RH, et al. 
Characterization and evaluation of neuronal trans-differentiation with electrophysiological properties of mesenchymal stem cells isolated from porcine endometrium. Int J Mol Sci. 2015;16:10934–51. Kang EJ, Lee YH, Kim MJ, Lee YM, Kumar BM, Jeon BG, et al. Transplantation of porcine umbilical cord matrix mesenchymal stem cells in a mouse model of Parkinson's disease. J Tissue Eng Regen Med. 2013;7:169-82. Ock SA, Jeon BG, Rho GJ. Comparative characterization of porcine mesenchymal stem cells derived from bone marrow extract and skin tissues. Tissue Eng Part C Methods. 2010;16:1481–91. Simonsen JL, Rosada C, Serakinci N, Justesen J, Stenderup K, Rattan SI, et al. Telomerase expression extends the proliferative life-span and maintains the osteogenic potential of human bone marrow stromal cells. Nat Biotechnol. 2002;20:592–6. Rando TA. Stem cells, ageing and the quest for immortality. Nature. 2006;441:1080–6. Wagner W, Horn P, Castoldi M, Diehlmann A, Bork S, Saffrich R, et al. Replicative senescence of mesenchymal stem cells: a continuous and organized process. PLoS One. 2008;3:e2213. Alessio N, Del Gaudio S, Capasso S, Di Bernardo G, Cappabianca S, Cipollaro M, et al. Low dose radiation induced senescence of human mesenchymal stromal cells and impaired the autophagy process. Oncotarget. 2015;6:8155–66. Li B, Zheng YW, Sano Y, Taniguchi H. Evidence for mesenchymal-epithelial transition associated with mouse hepatic stem cell differentiation. PLoS One. 2011;6:e17092. Dariolli R, Bassaneze V, Nakamuta JS, Omae SV, Campos LC, Krieger JE. Porcine adipose tissue-derived mesenchymal stem cells retain their proliferative characteristics, senescence, karyotype and plasticity after long-term cryopreservation. PLoS One. 2013;8:e67939. Qu CQ, Zhang GH, Zhang LJ, Yang GS. Osteogenic and adipogenic potential of porcine adipose mesenchymal stem cells. In Vitro Cell Dev Biol Anim. 2007;43:95–100. Arrigoni E, Lopa S, de Girolamo L, Stanco D, Brini AT. 
Isolation, characterization and osteogenic differentiation of adipose-derived stem cells: from small to large animal models. Cell Tissue Res. 2009;338:401–11. Zuk PA, Zhu M, Ashjian P, De Ugarte DA, Huang JI, Mizuno H, et al. Human adipose tissue is a source of multipotent stem cells. Mol Biol Cell. 2002;13:4279–95. Blande IS, Bassaneze V, Lavini-Ramos C, Fae KC, Kalil J, Miyakawa AA, et al. Adipose tissue mesenchymal stem cell expansion in animal serum-free medium supplemented with autologous human platelet lysate. Transfusion. 2009;49:2680–5. Zhukareva V, Obrocka M, Houle JD, Fischer I, Neuhuber B. Secretion profile of human bone marrow stromal cells: donor variability and response to inflammatory stimuli. Cytokine. 2010;50:317–21. Paradisi M, Alviano F, Pirondi S, Lanzoni G, Fernandez M, Lizzo G, et al. Human mesenchymal stem cells produce bioactive neurotrophic factors: source, individual variability and differentiation issues. Int J Immunopathol Pharmacol. 2014;27:391–402. Vakhrushev IV, Vdovin AS, Strukova LA, Yarygin KN. Variability of the phenotype and proliferation and migration characteristics of human mesenchymal stromal cells derived from the deciduous teeth pulp of different donors. Bull Exp Biol Med. 2016;160:525–9. Lavoie JR, Creskey MM, Muradia G, Bell GI, Sherman SE, Gao J, et al. Brief report: elastin microfibril Interface 1 and integrin-linked protein kinase are novel markers of islet regenerative function in human multipotent mesenchymal stromal cells. Stem Cells. 2016;34:2249–55. Paladino FV, Sardinha LR, Piccinato CA, Goldberg AC. Intrinsic variability present in Wharton's jelly mesenchymal stem cells and T cell responses may impact cell therapy. Stem Cells Int. 2017;2017:8492797. Heino TJ, Alm JJ, Moritz N, Aro HT. Comparison of the osteogenic capacity of minipig and human bone marrow-derived mesenchymal stem cells. J Orthop Res. 2012;30:1019–25. Yang Z, Vajta G, Xu Y, Luan J, Lin M, Liu C, et al. 
Production of pigs by hand-made cloning using mesenchymal stem cells and fibroblasts. Cell Reprogram. 2016;18:256–63. The authors are grateful to Dr. María Yáñez-Mó (Dpto. de Biología Molecular, UAM, Madrid, Spain) for critical reading of the manuscript. This work was supported by grants from the Spanish Ministerio de Economía, Industria y Competitividad to MAR (AGL2015-70140-R) and by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 731014 (MAR). The materials used and/or analyzed during the current study are available from the corresponding author on reasonable request. Departamento de Reproducción Animal, Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria, Avenida Puerta de Hierro 12, local 10, 28040, Madrid, Spain: Alexandra Calle, Clara Barrajón-Masa, Ernesto Gómez-Fidalgo, Mercedes Martín-Lluch, Paloma Cruz-Vigo, Raúl Sánchez-Sánchez & Miguel Ángel Ramírez. MAR conceived and designed the experiments. AC and MAR carried out the experiments. CB-M carried out the cell proliferation measurements. EG-F and RS-S helped with the immunocytochemical analysis by flow cytometry. MM-L and PC-V performed the cell metaphase and karyotype analysis. AC and MAR analyzed the data. MAR wrote the paper. All authors read and approved the final manuscript. Correspondence to Miguel Ángel Ramírez. All experimental procedures complied with the basic standards for the protection of animals used for experimental and other scientific purposes, including teaching, stipulated by the Ministry of Agriculture, Food and Environment. The procedures used in animals were covered by an established Animal Use Protocol approved by the Ethics Committee on Animal Experimentation at INIA.
Animal manipulations were performed according to the Spanish Policy for Animal Protection RD1201/05, which meets the European Union Directive 86/609 about the protection of animals used in research. Tissue samples were taken from an Iberian boar housed in the INIA Animal Laboratory Unit (Madrid, Spain), which meets the requirements of the European Union for Scientific Procedure Establishments. Calle, A., Barrajón-Masa, C., Gómez-Fidalgo, E. et al. Iberian pig mesenchymal stem/stromal cells from dermal skin, abdominal and subcutaneous adipose tissues, and peripheral blood: in vitro characterization and migratory properties in inflammation. Stem Cell Res Ther 9, 178 (2018). https://doi.org/10.1186/s13287-018-0933-y Revised: 12 June 2018 Mesenchymal stem/stromal cells Iberian pig Cell migration
\begin{document} \begin{abstract} We show various properties of numerical data of an embedded resolution of singularities for plane curves, which are inspired by a conjecture of Igusa on exponential sums. \end{abstract} \title{On the log canonical threshold and numerical data of a resolution in dimension 2} \pagestyle{myheadings} \markboth{{\normalsize W. Veys}}{ {\normalsize Numerical data}} \renewcommand{{\rm div}}{{\rm div}} \newcommand{\mathcal{T}}{\mathcal{T}} \newcommand{\mathbb S}{{\mathbb S}} \newcommand{\mbox{\boldmath$a$}}{\mbox{\boldmath$a$}} \newcommand{\mbox{\boldmath$b$}}{\mbox{\boldmath$b$}} \newcommand{\mbox{\boldmath$c$}}{\mbox{\boldmath$c$}} \newcommand{\mbox{\boldmath$e$}}{\mbox{\boldmath$e$}} \newcommand{\mbox{\boldmath$i$}}{\mbox{\boldmath$i$}} \newcommand{\mbox{\boldmath$j$}}{\mbox{\boldmath$j$}} \newcommand{\mbox{\boldmath$v$}}{\mbox{\boldmath$v$}} \newcommand{\mbox{\boldmath$k$}}{\mbox{\boldmath$k$}} \newcommand{\mbox{\boldmath$m$}}{\mbox{\boldmath$m$}} \newcommand{\mbox{\boldmath$s$}}{\mbox{\boldmath$s$}} \newcommand{\mbox{\boldmath$SW$}}{\mbox{\boldmath$SW$}} \newcommand{\mbox{\boldmath$f$}}{\mbox{\boldmath$f$}} \newcommand{\mbox{\boldmath$g$}}{\mbox{\boldmath$g$}} \newcommand{{\mathfrak q}}{{\mathfrak q}} \newcommand{o}{o} \newcommand{{\tiny\vee}}{{\tiny\vee}} \newcommand{g}{g} \newcommand{{l}}{{l}} \newcommand{\varepsilon}{\varepsilon} \newcommand{\tilde{X}}{\tilde{X}} \newcommand{\tilde{Z}}{\tilde{Z}} \newcommand{{\mathcal L}}{{\mathcal L}} \newcommand{{\mathcal M}}{{\mathcal M}} \newcommand{{\mathcal X}}{{\mathcal X}} \newcommand{{\mathcal O}}{{\mathcal O}} \newcommand{{\mathcal T}}{{\mathcal T}} \newcommand{{\mathcal I}}{{\mathcal I}} \newcommand{{\mathcal J}}{{\mathcal J}} \newcommand{{\mathcal C}}{{\mathcal C}} \newcommand{{\mathcal S}}{{\mathcal S}} \newcommand{{\mathcal Q}}{{\mathcal Q}} \newcommand{{\mathcal F}}{{\mathcal F}} \newcommand{\langle \chi_0\rangle}{\langle \chi_0\rangle} \newcommand{\bar{C}}{\bar{C}} 
\newcommand{\varphi}{\varphi} \def\mbox{mod}{\mbox{mod}} \let\d\partial \def\mathcal E{\mathcal E} \newcommand{\EuScript{C}}{\EuScript{C}} \def\mathbb C{\mathbb C} \def\mathbb Q{\mathbb Q} \def\mathbb R{\mathbb R} \def\mathbb S{\mathbb S} \def\mathbb H{\mathbb H} \def\mathbb B}\def\bC{\mathbb C}\def\bA{\mathbb A{\mathbb B}\def\bC{\mathbb C}\def\bA{\mathbb A} \def\mathbb Z{\mathbb Z} \def\mathbb N{\mathbb N} \def\mathbb N{\mathbb N} \def\mathbb P}\def\bt{\mathbb T{\mathbb P}\def\bt{\mathbb T} \def$ \square${$ \square$} \def(\, , \,){(\, , \,)} \def\mbox{coker}{\mbox{coker}} \def{\rm Im}{{\rm Im}} \newcommand{{G}}{{G}} \newcommand{\noindent}{\noindent} \newcommand{{\mathbb C}}{{\mathbb C}} \newcommand{{\mathbb Q}}{{\mathbb Q}} \newcommand{{\mathcal E}}{{\mathcal E}} \newcommand{{\mathcal W}}{{\mathcal W}} \newcommand{{\mathcal V}}{{\mathcal V}} \newcommand{{\mathcal P}}{{\mathcal P}} \newcommand{\calI}{{\mathcal I}}\newcommand{\calJ}{{\mathcal J}} \newcommand{\calA}{{\mathcal A}}\newcommand{\CalA}{{\calA_F\cup\calA_W}} \newcommand{{\mathcal A}'}{{\mathcal A}'} \newcommand{{\mathcal B}}{{\mathcal B}} \newcommand{{\mathcal R}}{{\mathcal R}} \newcommand{\calG}{{\mathcal G}}\newcommand{\calN}{{\mathcal N}} \newcommand{{\mathbb C}}{{\mathbb C}} \newcommand{B_{\epsilon_0}}{B_{\epsilon_0}} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{{\mathbb Q}}{{\mathbb Q}} \newcommand{S_{\epsilon_0}}{S_{\epsilon_0}} \newcommand{\epsilon}{\epsilon} \newcommand{ }{ } \newcommand{\sigma}{\sigma} \newcommand{S}{S} \newcommand{G_\pi(X)}{G_\pi(X)} \newcommand{G_\pi(X,f)}{G_\pi(X,f)} \newcommand{G_\pi(X,F)}{G_\pi(X,F)} \newcommand{\Gamma_\pi(X)}{\Gamma_\pi(X)} \newcommand{\Gamma_\pi(X,f)}{\Gamma_\pi(X,f)} \newcommand{\Gamma_\pi(X,F)}{\Gamma_\pi(X,F)} \newcommand{\Gamma_\pi(X,F,W)}{\Gamma_\pi(X,F,W)} \newcommand{w}{w} \newcommand{\label}{\label} \section{Introduction} Singularity invariants of a hypersurface are often described in terms of a chosen embedded resolution. 
In particular, the so--called numerical data $(N_i, \nu_i)$ of a resolution are crucial in various invariants, e.g. poles of zeta functions of Igusa type \cite{I2}\cite{DL1}\cite{DL2}, jumping coefficients of multiplier ideals \cite{ELSV}, roots of Bernstein--Sato polynomials \cite{K}, monodromy eigenvalues \cite{AC}, etc. In particular $\min_i \frac {\nu_i}{N_i}$ does not depend on the chosen resolution, and is nowadays called the log canonical threshold, see e.g. \cite{M}. Let $f$ be a polynomial in $n$ variables. In a previous version of the manuscript \cite{CMN} an equivalence was shown between a statement on numerical data of an embedded resolution of $f$ and a famous old conjecture on exponential sums of Igusa \cite{I1} (as well as with a local version of that conjecture by Denef--Sperber \cite{DS} and a more general version by Cluckers--Veys \cite{CV}). In the present paper, we present a proof of that statement in dimension $n=2$. After finishing this work, we learned that Cluckers--Musta\c t\u a--Nguyen proved the statement in arbitrary dimension, using techniques from the Minimal Model Program. We think however that various aspects of our more elementary proof of the two--dimensional case are of independent interest. In \S 2 we fix notation and state the Conjecture/Theorem on numerical data of Cluckers--Musta\c t\u a--Nguyen. An important ingredient in our proof for $n=2$ is a new property/formula for the numbers $\nu_i$ in terms of the dual resolution graph for plane curves, with appropriate decorations along edges, which we establish in \S 3. Then, in \S 4, we show (a somewhat stronger version of) the statement in \cite{CMN}. \section{Preliminaries} Let $f \in \mathbb C[x]=\mathbb C[x_1,\dots,x_n]$ be a nonconstant polynomial. 
Fix an embedded resolution $\pi$ of $f$, that is, $\pi:Y\to \mathbb C^n$ is a proper birational morphism satisfying (i) $Y$ is a (complex) nonsingular algebraic variety, (ii) $\pi$ is an isomorphism outside $\pi^{-1}\{f=0\}$, (iii) $\pi^{-1}\{f=0\}$ is a simple normal crossings divisor. We denote by $E_i, i\in T,$ the (nonsingular) irreducible components of $\pi^{-1}\{f=0\}$. Let $N_i$ and $\nu_i-1$ denote the multiplicity of $E_i$ in the divisor of $\pi^*f=f\circ \pi$ and $\pi^*(dx_1\wedge\dots\wedge dx_n)$, respectively. In other words, ${\rm div}(f \circ \pi) = \sum_{i\in T} N_iE_i$ and the canonical divisor $K_Y=K_{\pi}=\sum_{i\in T} (\nu_i-1)E_i$. The $(N_i,\nu_i)_{i\in T}$ are called the {\em numerical data} of $\pi$. In order to formulate the statement of \cite{CMN}, we need the notion of power condition. The normal crossings condition says that, for any point $P\in Y$, there is an affine neighbourhood $V$ of $P$, such that \begin{equation}\label{nc} f\circ\pi = u\prod_{i\in I} y_i^{N_i} , \end{equation} for some $I\subset T$, in the coordinate ring $\mathcal{O}_V$ of $V$. Here $i\in I$ if and only if $P\in E_i$, $u$ is a unit in $\mathcal{O}_V$, the component $E_i$ is given by $y_i=0$ and the $(y_i)_{i\in I}$ form a regular sequence in the local ring of $Y$ at $P$. Here we only state the local version of the power condition; this is the relevant one for the present paper. Also, we only need the local version of the log canonical threshold. Assuming that $f(0)=0$, the {\em log canonical threshold of $f$ at $0\, (\in \mathbb C^n)$} is $$ c_0 = \min_{i\in T, 0\in \pi(E_i)} \nu_i/N_i . $$ \begin{definition}\label{power} Let $f$ and $\pi$ be as above. Let $d\in \mathbb Z_{\geq 2}$. 
We say that $(f,\pi)$ {\em satisfies the $d$--power condition} if there exists a nonempty open $W$ in some irreducible component of some $\cap_{i\in I} E_i$ with $\pi(W)=\{0\}$ and some $g\in \mathcal{O}_W$ such that $$ d\mid N_i \text{ for all } i\in I $$ and $$ u |_W = g^d , $$ where $u$ is as in (\ref{nc}) on an open $V$ satisfying $W= (\cap_{i\in I} E_i) \cap V$. \end{definition} \begin{conj/thm}[\cite{CMN}]\label{CNconj} Let $f$ and $\pi$ be as above. If $(f,\pi)$ satisfies the $d$--power condition, then \begin{equation}\label{CNinequality} c_0 \leq \frac 1d + \sum_{i\in I} N_i (\frac{\nu_i}{N_i} - c_0 ). \end{equation} \end{conj/thm} We prefer to rewrite this inequality in the form \begin{equation}\label{formulation} c_0 \leq \frac{\sum_{i\in I} \nu_i + 1/d}{\sum_{i\in I} N_i + 1} . \end{equation} \begin{remark}\label{trivial} We note that Conjecture \ref{CNconj} is trivial when $c_0 \leq 1/d$. This happens in particular when $d \mid N_\ell$ for some component $E_\ell$ of the strict transform of $f$. \end{remark} When $|I|=n$ in Definition \ref{power}, the power condition is automatically satisfied, since then $W=\cap_{i\in I} E_i$ is a point and $u |_W$ is a constant. Otherwise, the following property is a useful corollary of the power condition. It is shown in \cite{CMN}, but follows also from an easy local computation, using unique factorization in a regular local ring. \begin{proposition}\label{d-divisible} Let $f$ and $\pi$ be as above. If $(f,\pi)$ satisfies the $d$--power condition, through the open $W$, then we have $d \mid N_j$ for all $j\in T$ satisfying $E_j \cap \overline{W} \neq \emptyset$. \end{proposition} \section{Description of $\nu$ via dual graph} From now on we consider the plane curve case $n=2$, and we take $\pi$ as the {\em minimal} embedded resolution of $f$. In fact, we can as well study the germ of $f$ at the origin and allow $f$ to be an analytic function rather than just a polynomial. 
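For completeness, we include the elementary computation behind this reformulation.

\begin{remark}
The equivalence of (\ref{CNinequality}) and (\ref{formulation}) follows by collecting the terms involving $c_0$: inequality (\ref{CNinequality}) reads
$$ c_0 \leq \frac 1d + \sum_{i\in I} \nu_i - c_0 \sum_{i\in I} N_i ,$$
hence
$$ c_0 \Big(\sum_{i\in I} N_i + 1\Big) \leq \sum_{i\in I} \nu_i + \frac 1d ,$$
which is precisely (\ref{formulation}).
\end{remark}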
At any rate, we slightly redefine $T$ as $T_e\cup T_s$, where $T_e$ runs over the {\em exceptional components} of $\pi$ and $T_s$ runs over the {\em analytically irreducible components of the strict transform} of $f$ by $\pi$. In the (dual) resolution graph $\Gamma$ of $\pi$ one associates a vertex to each exceptional curve $E_i$, which we denote here for simplicity also by $E_i$, and an arrowhead to each (analytically) irreducible component $E_i$ of the strict transform of $f$. Each intersection between two components is indicated by an edge connecting the corresponding vertices or arrowheads. We denote here by $\Gamma^e$ the restriction of $\Gamma$ to the exceptional locus, i.e., without the arrows. For $i\in T_e$ we denote by $\delta_i$ the {\em valency} of $E_i$ in $\Gamma^e$, that is, the number of intersections of $E_i$ with other exceptional components, and hence also the number of edges in $\Gamma^e$ connected to $E_i$. We use the language of Eisenbud--Neumann diagrams \cite{EN}, associated to the (full) dual resolution graphs $\Gamma$ and $\Gamma^e$, where edges are decorated as follows. For $i\in T_e$, an edge decoration $a$ next to the exceptional $E_i$, along an edge $e$ adjacent to $E_i$, indicates that $a$ is the absolute value of the determinant of the intersection matrix of all exceptional components appearing in the subgraph of $\Gamma \setminus \{E_i\}$ in the direction of $e$. These decorations satisfy the following properties. \begin{itemize} \item All edge decorations are positive integers. \item The edge decorations along all edges next to a fixed $E_i$ are pairwise coprime and at most two of them are greater than $1$. \item Fix an edge $e$ in $\Gamma$ between vertices $E_i$ and $E_j$ (thus corresponding to exceptional components). Let $a$ and $b$ be the decorations along $e$ next to $E_i$ and $E_j$, respectively. Let also $a_k$ and $b_\ell$ denote the edge decorations along the other edges, connected to $E_i$ and $E_j$, respectively.
Then we have the {\em edge determinant rule} $$ab - \prod_{k} a_k \prod_{\ell}b_\ell =1 $$ (where a product over the empty set is $1$). \end{itemize} \begin{picture}(500,60)(50,-20) \put(240,20){\circle*{4}} \put(240,20){\line(1,0){80}} \put(320,20){\circle*{4}} \put(240,20){\line(-2, -1){30}} \put(240,20){\line(-2, 1){30}} \put(220,23){\makebox(0,0){$\vdots$}} \put(320,20){\line(2,-1){30}} \put(320,20){\line(2, 1){30}} \put(337,23){\makebox(0,0){$\vdots$}} \put(242,-10){\makebox(0,0){$E_i$}} \put(320,-10){\makebox(0,0){$E_j$}} \put(250,24){\makebox(0,0){$a$}} \put(313,26){\makebox(0,0){$b$}} \put(327,32) {\makebox(0,0){$b_1$}} \put(328,10){\makebox(0,0){$b_n$}} \put(233,30) {\makebox(0,0){$a_1$}} \put(234,10){\makebox(0,0){$a_m$}} \end{picture} \noindent In fact, these properties are also valid for the dual graph of a non--minimal embedded resolution. Using that $\pi$ is minimal we also have \begin{itemize} \item An edge decoration along an edge that is the start of a chain of exceptional components, ending in a vertex of valency $1$ of $\Gamma$, is greater than $1$. \end{itemize} \begin{example}\label{example} Take $f = (y^2-x^3)^2-x^5y$. The decorated dual graph $\Gamma$ of its minimal embedded resolution is as follows, where we also indicate the numerical data $(N_i,\nu_i)$ of the $E_i$. Note that $c_0= 5/12$. 
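Indeed, from the numerical data displayed in the figure, the ratios $\nu_i/N_i$ for $E_0,E_1,\dots,E_5$ are
$$\frac 11,\ \frac 24,\ \frac 36,\ \frac 5{12},\ \frac 6{13},\ \frac{11}{26},$$
and the smallest of these, $5/12$, is attained at $E_3$.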
\begin{picture}(400,100)(-30,-35) \put(150,25){\vector(-1,0){50}} \put(150,25){\circle*{4}} \put(220,25){\circle*{4}} \put(290,25){\circle*{4}} \put(150,-25){\circle*{4}} \put(220,-25){\circle*{4}} \put(150,25){\line(1,0){140}} \put(150,25){\line(0,-1){50}} \put(220,25){\line(0,-1){50}} \put(213,31){\makebox(0,0){$1$}}\put(158,31){\makebox(0,0){$13$}} \put(154,15){\makebox(0,0){$2$}} \put(154,-17){\makebox(0,0){$7$}} \put(283,31){\makebox(0,0){$1$}}\put(226,31){\makebox(0,0){$3$}} \put(215,-17){\makebox(0,0){$2$}} \put(215,15){\makebox(0,0){$2$}} \put(150,45){\makebox(0,0){$E_5(26,11)$}} \put(220,45){\makebox(0,0){$E_3(12,5)$}} \put(290,45){\makebox(0,0){$E_1(4,2)$}} \put(122,-27){\makebox(0,0){$E_4(13,6)$}} \put(246,-27){\makebox(0,0){$E_2(6,3)$}} \put(78,25){\makebox(0,0){$E_0(1,1)$}} \end{picture} \end{example} We have the following well known \lq diagram calculus\rq, computing the numerical data $(N_i,\nu_i)$ of an exceptional curve $E_i$ in terms of the edge decorations of the graph $\Gamma$. See for instance \cite{EN} and \cite{NV}. (It provides another way to compute the numerical data in Example \ref{example}.) \begin{proposition}\label{N-theorem} Fix an exceptional curve $E_i$. For any other component $E_j$, let $\ell_{ij}$ be the product of the edge decorations that are adjacent to, but not on, the path in $\Gamma$ from $E_i$ to $E_j$. Then \begin{equation}\label{N-formula} N_i = \sum_{j\in T_s} \ell_{ij} N_j , \end{equation} \begin{equation}\label{old-nu-formula} \nu_i = \sum_{j\in T_e} \ell_{ij} (2-\delta_j). \end{equation} \end{proposition} We now show a useful upper bound for $\nu_i$, depending only on the edge decorations along $E_i$, that is often even an equality. \begin{theorem}\label{nu-theorem} (1) Let $E$ be a vertex of valency at least $2$ in $\Gamma^e$. Say $a$ and $b$ are edge decorations at $E$ such that all other edge decorations at $E$ are $1$. (Possibly also $a$ or $b$ are $1$.) Then we have $\nu \leq a+b$.
More precisely, we have the following. (i) If starting from $E$, say in the $a$--decorated edge direction, there exists in some part of $\Gamma^e$ a vertex of valency at least $3$ as in the figure below, where both $c>1$ and $d>1$, then $\nu \leq a-b$. \begin{picture}(500,60)(50,-10) \put(240,20){\circle*{4}} \dashline[3]{3}(270,20)(290,20) \put(320,20){\circle*{4}} \put(240,20){\line(1,0){20}}\put(300,20){\line(1,0){20}} \dashline[3]{3}(240,20)(210,5) \put(240,20){\line(-2, 1){30}} \put(220,23){\makebox(0,0){$\vdots$}} \put(320,20){\line(2,-1){30}} \put(320,20){\line(2, 1){30}} \put(337,23){\makebox(0,0){$\vdots$}} \put(242,10){\makebox(0,0){$E$}} \put(250,24){\makebox(0,0){$a$}} \put(313,26){\makebox(0,0){$1$}} \put(325,28) {\makebox(0,0){$c$}} \put(325,10){\makebox(0,0){$d$}} \put(233,30) {\makebox(0,0){$b$}} \end{picture} (ii) If there is no such vertex (in any direction starting from $E$), then $\nu=a+b$. \noindent (2) Let $E$ be a vertex of valency $1$ in $\Gamma^e$, with edge decoration $a$. Then $\nu \leq a+1$, and more precisely, with the analogous case distinction, either $\nu \leq a-1$ or $\nu = a+1$. \end{theorem} \noindent {\em Note.} There can be at most one direction as in (i) starting from $E$, which is well known and will also be clear from the proof. \begin{proof} Consider any vertex $E_j$ of $\Gamma^e$ with some adjacent edge decoration equal to $1$, such that the subgraph $\Gamma_j$ in the direction of this edge does {\em not} contain $E$. (Possibly $E_j=E$.) \begin{picture}(450,60)(50,-10) \put(240,20){\circle*{4}} \dashline[3]{3}(240,20)(210,5) \put(320,20){\makebox(0,0){$\Gamma_j$}} \put(240,20){\line(1,0){30}} \put(290,20){\makebox(0,0){$\hdots$}} \put(240,20){\line(-2, 1){30}} \put(220,23){\makebox(0,0){$\vdots$}} \put(242,10){\makebox(0,0){$E_j$}} \put(250,26){\makebox(0,0){$1$}} \end{picture} \noindent We claim that, in order to compute $\nu$, we can contract/forget the subgraph $\Gamma_j$. 
Indeed, since the absolute value of the determinant of the intersection matrix of $\Gamma_j$ is $1$, all the exceptional curves in $\Gamma_j$ can be blown down. We can consider this \lq blown down situation\rq\ as an intermediate step in constructing $\pi$, and $\nu$ can be computed on the graph of that intermediate step. (Alternatively, one can prove the claim using an elementary computation with formula (\ref{old-nu-formula}).) Now we contract/delete all such subgraphs. The resulting graph $\Gamma_0$, corresponding to some intermediate step in constructing $\pi$, must satisfy one of the two following properties. (i) There is still a vertex of valency at least $3$ in $\Gamma_0$, say in the $a$--decorated edge direction. Then $\Gamma_0$ is necessarily of the form below, where all $a_i, b_i >1$. When $b>1$, the part of $\Gamma_0$ in the $b$--decorated edge direction is a chain, and when $b=1$, the vertex $E$ has valency $1$ in $\Gamma_0$. \begin{picture}(405,85)(-5,-30) \put(50,40){\circle*{4}} \put(110,40){\circle*{4}} \put(170,40){\circle*{4}} \put(250,40){\circle*{4}} \put(310,40){\circle*{4}} \put(370,40){\circle*{4}} \put(110,-20){\circle*{4}} \put(170,-20){\circle*{4}} \put(250,-20){\circle*{4}} \put(310,-20){\circle*{4}} \dashline[3]{3}(20,40)(50,40) \put(50,40){\line(1,0){20}} \dashline[3]{3}(75,40)(85,40) \put(90,40){\line(1,0){40}} \dashline[3]{3}(135,40)(145,40) \put(150,40){\line(1,0){40}} \put(230,40){\line(1,0){40}} \dashline[3]{3}(275,40)(285,40) \put(290,40){\line(1,0){40}} \dashline[3]{3}(335,40)(345,40) \put(350,40){\line(1,0){20}} \put(170,40){\line(0,-1){20}} \dashline[3]{3}(170,15)(170,5) \put(170,0){\line(0,-1){20}} \put(110,40){\line(0,-1){20}} \dashline[3]{3}(110,15)(110,5) \put(110,0){\line(0,-1){20}} \put(250,40){\line(0,-1){20}} \dashline[3]{3}(250,15)(250,5) \put(250,0){\line(0,-1){20}} \put(310,40){\line(0,-1){20}} \dashline[3]{3}(310,15)(310,5) \put(310,0){\line(0,-1){20}} \put(40,45){\makebox(0,0){$b$}} \put(102,45){\makebox(0,0){$1$}} 
\put(160,45){\makebox(0,0){$1$}} \put(240,45){\makebox(0,0){$1$}} \put(300,45){\makebox(0,0){$1$}} \put(120,45){\makebox(0,0){$a_1$}} \put(180,45){\makebox(0,0){$a_2$}} \put(262,45){\makebox(0,0){$a_{r-1}$}} \put(320,45){\makebox(0,0){$a_r$}} \put(58,45){\makebox(0,0)[l]{$a$}} \put(112,30){\makebox(0,0)[l]{$b_1$}} \put(172,30){\makebox(0,0)[l]{$b_2$}} \put(252,30){\makebox(0,0)[l]{$b_{r-1}$}} \put(312,30){\makebox(0,0)[l]{$b_r$}} \put(208,40){\makebox(0,0){$\ldots$}} \put(50,28){\makebox(0,0){$E$}} \end{picture} \noindent Note that in our resolution graphs there can be at most one vertex of valency at least $3$ with two attached chains and both edge decorations larger than $1$ (as in the rightmost part of the figure above). Indeed, by a contraction argument as before, we can consider the subgraph consisting of only that vertex and the two attached chains as corresponding to some intermediate step of $\pi$, and then this subgraph must contain the first created exceptional curve as a vertex. Using formula (\ref{old-nu-formula}) we have $$\aligned \nu = \, &a+b[ a_1-a_1b_1+(a_2-a_2b_2)b_1+\dots+(a_{r-1}-a_{r-1}b_{r-1})b_1b_2\dots b_{r-2} \\ &+ (a_r+b_r-a_rb_r)b_1b_2\dots b_{r-1}]. \endaligned$$ Since all $a_i-a_ib_i$ and also $a_r+b_r-a_rb_r$ are negative, we have that $\nu \leq a-b$. (ii) There is no vertex of valency at least $3$ in $\Gamma_0$. Then we have $\nu=a+b$. (If $a$ or $b$ is equal to $1$, then $E$ has valency at most $1$ in $\Gamma_0$.) \noindent The proof of (2) is completely analogous. \end{proof} \begin{example}[continuing Example \ref{example}] All different cases of Theorem \ref{nu-theorem} occur in the example. In particular, $E_5$ and $E_4$ satisfy the inequality \lq $\nu\leq a-b$\rq, and both inequalities are sharp here. \end{example} \section{Proof of the main theorem} In dimension 2 we only have the cases $|I|=1$ and $|I|=2$.
By Remark \ref{trivial} and Proposition \ref{d-divisible} we may and will assume that \begin{itemize} \item when $I=\{i\}$, the component $E_i$ is exceptional and does not intersect the strict transform of $f$, \item when $I=\{i,j\}$ (with $i\neq j$), the components $E_i$ and $E_j$ are exceptional. \end{itemize} \noindent We will in fact show a slightly stronger statement than (\ref{formulation}). \begin{theorem}\label{main theorem} Let $d \in \mathbb Z_{\geq 2}$. (1) Let $E_i$ be an exceptional component such that $d \mid N_i$ and $d \mid N_\ell$ for all components $E_\ell$ intersecting $E_i$. Then either $$\frac{\nu_i}{N_i} \leq \frac 1d \qquad\text{or}\qquad \frac {\nu_\ell}{N_\ell} \leq \frac{ \nu_i + 1/d}{ N_i + 1}$$ for some intersecting component $E_\ell$. (2) Let $E_i$ and $E_j$ be intersecting exceptional components such that $d \mid N_i$ and $d \mid N_j$. Then (up to a switch of the indices) $$\frac{\nu_i}{N_i}\leq \frac 1d \qquad\text{or}\qquad \frac {\nu_j}{N_j} \leq \frac{ \nu_i + 1/d}{ N_i + 1} .$$ \end{theorem} More precisely, we will argue by case distinction, depending on the position of $E_i$ and $E_j$ in the graph $\Gamma$. These different results could be of interest for future reference; for that reason we formulate them in separate independent statements. \begin{lemma}\label{lemma1} Let $E_i$ and $E_j$ be adjacent vertices on the graph $\Gamma$. Let $d \in \mathbb Z_{\geq 2}$ such that $d \mid N_i$ and $d \mid N_j$. Suppose that there exist arrows in $\Gamma$ on both sides of the edge between $E_i$ and $E_j$. Then $\nu_i/N_i \leq 1/d $ and $\nu_j/N_j \leq 1/d $.
\end{lemma} \begin{picture}(500,60)(50,-10) \put(220,23){\makebox(0,0){$\vdots$}} \put(240,20){\circle*{4}} \dashline[3]{3}(320,20)(350,5) \put(320,20){\circle*{4}} \put(240,20){\line(1,0){80}} \dashline[3]{3}(240,20)(210,5) \put(240,20){\line(-2, 1){30}} \put(220,23){\makebox(0,0){$\vdots$}} \put(320,20){\line(2, 1){30}} \put(337,23){\makebox(0,0){$\vdots$}} \put(242,0){\makebox(0,0){$E_i$}} \put(320,0){\makebox(0,0){$E_j$}} \put(250,24){\makebox(0,0){$a$}} \put(313,26){\makebox(0,0){$p$}} \put(327,31) {\makebox(0,0){$q_\ell$}} \put(233,31) {\makebox(0,0){$b_k$}} \end{picture} \begin{proof} Let $a$ and $p$ be the edge decorations at $E_i$ and $E_j$ on the edge connecting them, and $b_k$ and $q_\ell$ the other edge decorations at $E_i$ and $E_j$, respectively. By Proposition \ref{N-theorem} we have that $$N_i = La+R\prod_k b_k \qquad\text{and}\qquad N_j=L\prod_\ell q_\ell+Rp ,$$ where $L$ and $R$ describe the total contribution in formula (\ref{N-formula}) of arrows \lq on the left of $E_i$\rq\ and \lq on the right of $E_j$\rq, respectively. Since $ap-\prod_k b_k \prod_\ell q_\ell=1$ (edge determinant rule), we derive that $$N_i p - N_j \prod_k b_k = L \qquad\text{and}\qquad N_j a - N_i \prod_\ell q_\ell =R ,$$ and hence that $d \mid L$ and $d \mid R$. If two of the $b_k$ are greater than $1$, say $b_1$ and $b_2$, then $a=1$ and $N_i=L+Rb_1b_2$. By Theorem \ref{nu-theorem} we have that $\nu_i \leq b_1+b_2$ and consequently $$ \frac{\nu_i}{N_i} \leq \frac{b_1+b_2}{d(\frac Ld + \frac Rd b_1b_2)} < \frac 1d .$$ If on the other hand at most one of the $b_k$ is greater than $1$, say (at most) $b_1$, then $N_i=La+Rb_1$. Now we have by Theorem \ref{nu-theorem} that $\nu_i \leq a+b_1$ and consequently $$ \frac{\nu_i}{N_i} \leq \frac{a+b_1}{d(\frac Ld a + \frac Rd b_1)} \leq \frac 1d .$$ By symmetry the same result holds for $\nu_j/N_j$. \end{proof} \begin{example} Take $f=x^2(y^2-x^4)$.
Its minimal embedded resolution provides an easy illustration of Lemma \ref{lemma1} with $d=2$, where moreover both inequalities are sharp. \begin{picture}(400,60)(50,-10) \put(240,20){\circle*{4}} \put(320,20){\circle*{4}} \put(240,20){\line(1,0){80}} \put(240,20){\vector(-1,0){50}} \put(320,20){\vector(3, 1){40}} \put(320,20){\vector(3, -1){40}} \put(376,36){\makebox(0,0){$(1,1)$}} \put(376,4){\makebox(0,0){$(1,1)$}} \put(172,20){\makebox(0,0){$(2,1)$}} \put(242,7){\makebox(0,0){$E_1(4,2)$}} \put(317,7){\makebox(0,0){$E_2(6,3)$}} \put(250,26){\makebox(0,0){$1$}} \put(313,26){\makebox(0,0){$2$}} \end{picture} \end{example} \begin{lemma}\label{lemma2} Let $E_i$ and $E_j$ be adjacent vertices on the graph $\Gamma$. Let $d \in \mathbb Z_{\geq 2}$ such that $d \mid N_i$ and $d \mid N_j$. Suppose that, besides the edge in the direction of $E_i$, the vertex $E_j$ is adjacent precisely to a subgraph of $\Gamma$ of the following form (where the valency of $E_j$ in $\Gamma$ can be $2$ or $3$, and there is at least one vertical chain). Then $\nu_i/N_i < 1/d $. 
\end{lemma} \begin{picture}(230,110)(-15,-30) \put(40,43){\makebox(0,0){$\vdots$}} \put(60,40){\circle*{4}} \put(110,40){\circle*{4}} \put(170,40){\circle*{4}} \put(250,40){\circle*{4}} \put(310,40){\circle*{4}} \put(370,40){\circle*{4}} \put(110,-20){\circle{4}} \put(170,-20){\circle*{4}} \put(250,-20){\circle*{4}} \put(310,-20){\circle*{4}} \dashline[3]{3}(60,40)(30,25) \put(60,40){\line(-2, 1){30}} \put(60,40){\line(1,0){70}} \dashline[3]{3}(135,40)(145,40) \put(150,40){\line(1,0){40}} \put(230,40){\line(1,0){40}} \dashline[3]{3}(275,40)(285,40) \put(290,40){\line(1,0){40}} \dashline[3]{3}(335,40)(345,40) \put(350,40){\line(1,0){20}} \put(170,40){\line(0,-1){20}} \dashline[3]{3}(170,15)(170,5) \put(170,0){\line(0,-1){20}} \dashline[3]{3}(110,40)(110,-20) \put(250,40){\line(0,-1){20}} \dashline[3]{3}(250,15)(250,5) \put(250,0){\line(0,-1){20}} \put(310,40){\line(0,-1){20}} \dashline[3]{3}(310,15)(310,5) \put(310,0){\line(0,-1){20}} \put(45,41){\makebox(0,0){$b$}} \put(102,45){\makebox(0,0){$p$}} \put(118,33){\makebox(0,0){$q$}} \put(65,45){\makebox(0,0)[l]{$a$}} \put(208,40){\makebox(0,0){$\ldots$}} \put(60,60){\makebox(0,0){$E_i$}} \put(110,60){\makebox(0,0){$E_j$}} \end{picture} \begin{proof} Let $q$ be the decoration or the product of the two decorations at $E_j$, not on the edge between $E_i$ and $E_j$. By Proposition \ref{N-theorem} we have that $N_i = La$ and $N_j=Lq$, where $L$ is the total contribution in formula (\ref{N-formula}) of all arrows in $\Gamma$. Since $ap-bq=1$, we derive that $pN_i -bN_j = L$ and hence that $d \mid L$. By Theorem \ref{nu-theorem} we have that $\nu_i \leq a-b$ and consequently $$\frac{\nu_i}{N_i} \leq \frac{a-b}{La} < \frac a{La} = \frac 1L \leq \frac 1d .$$ \end{proof} \begin{example}[continuing Example \ref{example}] The vertices $E_5$ and $E_3$ form an illustration of Lemma \ref{lemma2} with $d=2$. 
\end{example} \begin{lemma}\label{lemma3} Let $E_1$ be an end vertex of $\Gamma$, such that $E_1, E_2, \dots, E_r$ form a chain in $\Gamma$ (with $r\geq 2$). Let $d \in \mathbb Z_{\geq 2}$ such that $d \mid N_r$ and $d \mid N_{r-1}$. Then \begin{equation}\label{inequality3} \frac{\nu_r}{N_r} \leq \frac{ \nu_{r-1} + 1/d}{ N_{r-1} + 1} . \end{equation} \end{lemma} \begin{picture}(230,85)(-45,0) \put(40,43){\makebox(0,0){$\vdots$}} \put(60,40){\circle*{4}} \put(115,40){\circle*{4}} \put(215,40){\circle*{4}} \put(270,40){\circle*{4}} \dashline[3]{3}(60,40)(30,25) \put(60,40){\line(-2, 1){30}} \put(60,40){\line(1,0){80}} \put(190,40){\line(1,0){80}} \put(54,51){\makebox(0,0){$b_r$}} \put(101,46){\makebox(0,0){$b_{r-1}$}} \put(128,45){\makebox(0,0){$a_{r-1}$}} \put(65,45){\makebox(0,0)[l]{$a_r$}} \put(205,46){\makebox(0,0){$b_2$}} \put(260,46){\makebox(0,0){$b_{1}$}} \put(220,45){\makebox(0,0)[l]{$a_2$}} \put(165,40){\makebox(0,0){$\ldots$}} \put(60,20){\makebox(0,0){$E_r$}} \put(115,20){\makebox(0,0){$E_{r-1}$}} \put(215,20){\makebox(0,0){$E_2$}} \put(270,20){\makebox(0,0){$E_{1}$}} \end{picture} \noindent {\em Note.} The condition $d \mid N_r$ and $d \mid N_{r-1}$ is equivalent, for instance, to $d \mid N_1$, and to $d \mid N_i$ for all $i=1,\dots,r$. \begin{proof} It is well known (see e.g. \cite{V}) that $N_r= a_rN_1$ and $N_{r-1} = a_{r-1}N_1$ (it also follows from Proposition \ref{N-theorem}). By Theorem \ref{nu-theorem} we have $$ \nu_r = b_r +xa_r \qquad\text{and}\qquad \nu_{r-1} = b_{r-1} +xa_{r-1} ,$$ where $x=1$ or $x \in \mathbb Z_{<0}$, in particular $x \leq 1$. Substituting these equalities in (\ref{inequality3}) yields, after a straightforward calculation, $$ b_r \leq N_1(a_rb_{r-1} - b_ra_{r-1}) + (\frac{N_1}d -x) a_r = N_1 + (\frac{N_1}d -x) a_r . $$ Since $x \leq 1$ and $d \mid N_1$, this will be implied by \begin{equation}\label{inequality} b_r \leq N_1 .
\end{equation} Say the chain above ends in a vertex $E_n$ of valency at least $3$ in $\Gamma$ (where $n\geq r$). It is well known and easily verified that $ a_{i+1} \geq a_i +1$ for $i=1,\dots,n-1$, where $a_1=1$ (it follows from the fact that, in the minimal embedded resolution, all self--intersections $E_1^2,\dots, E_{n-1}^2$ are at most $-2$). Then, by an elementary calculation, this implies $b_{i} \leq b_{i+1} $ for $i=1,\dots,n-1$ and in particular $b_{r} \leq b_n$. \begin{picture}(230,80)(-15,0) \put(40,43){\makebox(0,0){$\vdots$}} \put(60,40){\circle*{4}} \put(115,40){\circle*{4}} \put(185,40){\circle*{4}} \put(245,40){\circle*{4}} \put(315,40){\circle*{4}} \put(370,40){\circle*{4}} \put(60,40){\line(-2,-1){30}} \put(60,40){\line(-2,1){30}} \put(60,40){\line(1,0){75}} \put(165,40){\line(1,0){100}} \put(295,40){\line(1,0){75}} \put(54,51){\makebox(0,0){$b_n$}} \put(100,46){\makebox(0,0){$b_{n-1}$}} \put(128,45){\makebox(0,0){$a_{n-1}$}} \put(65,45){\makebox(0,0)[l]{$a_n$}} \put(175,46){\makebox(0,0){$b_r$}} \put(233,46){\makebox(0,0){$b_{r-1}$}} \put(256,45){\makebox(0,0){$a_{r-1}$}} \put(187,45){\makebox(0,0)[l]{$a_r$}} \put(305,46){\makebox(0,0){$b_2$}} \put(360,46){\makebox(0,0){$b_{1}$}} \put(320,45){\makebox(0,0)[l]{$a_2$}} \put(150,40){\makebox(0,0){$\ldots$}} \put(280,40){\makebox(0,0){$\ldots$}} \put(60,20){\makebox(0,0){$E_n$}} \put(115,20){\makebox(0,0){$E_{n-1}$}} \put(185,20){\makebox(0,0){$E_r$}} \put(245,20){\makebox(0,0){$E_{r-1}$}} \put(315,20){\makebox(0,0){$E_2$}} \put(370,20){\makebox(0,0){$E_{1}$}} \end{picture} We claim that $b_n \leq N_1$, which then implies (\ref{inequality}). When $b_n=1$, this is trivial. Otherwise the decorations along the other edges adjacent to $E_n$ are $1$, implying that, in the direction of such an other edge away from $E_n$, there is at least one arrow. And then Proposition \ref{N-theorem} yields that $N_1 \geq b_n$. 
\end{proof} \begin{example}[continuing Example \ref{example}] The end vertices $E_1$ with $d=4$, $E_2$ with $d=6$, and $E_4$ with $d=13$, form three different illustrations of Lemma \ref{lemma3}. Each time the length $r$ of the chain is just $2$. Here the inequalities are not sharp. (The minimal embedded resolution of $f=y^2-x^3$ provides two sharp examples.) \end{example} \begin{lemma}\label{lemma4} Let $E_i$ be a vertex of the graph $\Gamma$ of valency at least $3$, where two chains are attached to $E_i$ with end vertices $E_1$ and $E_2$, respectively. Let $d \in \mathbb Z_{\geq 2}$ such that $d \mid N_1$ and $d \mid N_2$, and hence $d \mid N_i$. Then $\nu_i/N_i < 1/d $. \end{lemma} \begin{picture}(500,65)(50,-20) \put(220,23){\makebox(0,0){$\vdots$}} \put(240,20){\circle*{4}} \dashline[3]{3}(240,20)(210,5) \put(240,20){\line(-2, 1){30}} \put(220,23){\makebox(0,0){$\vdots$}} \put(240,20){\line(4,1){25}} \dashline[3]{3}(272,28)(288,32) \put(320,40){\line(-4,-1){25}} \put(240,20){\line(4, -1){25}} \dashline[3]{3}(272,12)(288,8) \put(320,0){\line(-4,1){25}} \put(320,40){\circle*{4}} \put(320,0){\circle*{4}} \put(237,7){\makebox(0,0){$E_i$}} \put(334,40){\makebox(0,0){$E_1$}} \put(334,0){\makebox(0,0){$E_2$}} \put(250,30){\makebox(0,0){$a$}} \put(250,10) {\makebox(0,0){$b$}} \end{picture} \noindent {\em Note.} The conditions $d \mid N_1$ and $d \mid N_2$ imply that $d \mid N_\ell$ for all $E_\ell$ in both chains. \begin{proof} By Proposition \ref{N-theorem} we have that $$N_i = Lab, \quad N_1=Lb, \quad N_2=La,$$ where $L$ is the total contribution in formula (\ref{N-formula}) of the arrows \lq on the left of $E_i$\rq. Since $a$ and $b$ are coprime, we derive that $d \mid L$. By Theorem \ref{nu-theorem}, we have that $\nu_i = a +b$. Hence $$ \frac{\nu_i}{N_i} = \frac{a+b}{Lab} < \frac 1L \leq \frac 1d ,$$ where the inequality follows from $a,b > 1$. 
\end{proof} \begin{example}[continuing Example \ref{example}] The vertex $E_3$ has two such attached chains and illustrates Lemma \ref{lemma4} with $d=2$. \end{example} We now show Theorem \ref{main theorem}. \begin{proof} (1) If $E_i$ satisfies the condition of Lemma \ref{lemma1}, then $\nu_i/N_i \leq 1/d$. Otherwise, there is only one \lq arrow direction\rq\ starting from $E_i$. When $E_i$ has valency $1$ (in $\Gamma$), we can consider it as the vertex $E_1$ in Lemma \ref{lemma3}, and then its adjacent vertex $E_2$ satisfies $$\frac{\nu_2}{N_2} \leq \frac{ \nu_{1} + 1/d}{ N_{1} + 1} .$$ When $E_i$ has valency at least $2$, we can either consider it as the vertex $E_{r-1}$ of Lemma \ref{lemma3}, or it must be in the situation of Lemma \ref{lemma2} or Lemma \ref{lemma4}. In all these cases the conclusion holds. (2) If $E_i$ and $E_j$ satisfy the condition of Lemma \ref{lemma1}, both $\nu_i/N_i \leq 1/d$ and $\nu_j/N_j \leq 1/d$. Otherwise they satisfy either the condition of Lemma \ref{lemma2} or they can be identified with the vertices $E_r$ and $E_{r-1}$ in the chain of Lemma \ref{lemma3}, implying one of the desired inequalities. \end{proof} \end{document}
Dimensionality, Coordinate System and Reference Frame for Analysis of In-Situ Space Plasma and Field Data Q. Q. Shi, A. M. Tian, S. C. Bai, H. Hasegawa, A. W. Degeling, Z. Y. Pu, M. Dunlop, R. L. Guo, S. T. Yao, Q.-G. Zong, Y. Wei, X.-Z. Zhou, S. Y. Fu & Z. Q. Liu Space Science Reviews volume 215, Article number: 35 (2019) In the analysis of in-situ space plasma and field data, the establishment of a coordinate system and a frame of reference helps us greatly simplify a given problem and provides the framework that enables a clear understanding of physical processes by ordering the experimental data. For example, one of the most important tasks of space data analysis is to compare the data with simulations and theory, which is facilitated by an appropriate choice of coordinate system and reference frame. While in simulations and theoretical work the establishment of the coordinate system (generally based on the dimensionality or dimension number of the field quantities being studied) and the reference frame (normally moving with the structure of interest) is often straightforward, in space data analysis these are not defined a priori, and need to be deduced from an analysis of the data itself. Although various ways of building a dimensionality-based (D-based) coordinate system (i.e., one that takes account of the dimensionality, e.g., 1-D, 2-D, or 3-D, of the observed system/field) and a reference frame moving along with the structure have been used in space plasma data analysis for several decades, in recent years some noteworthy approaches have been proposed. In this paper, we will review the past and recent approaches in space data analysis for the determination of a structure's dimensionality and the building of a D-based coordinate system and a proper moving frame, from which one can directly compare with simulations and theory. 
Along with the determination of such coordinate systems and proper frame, the variant axis/normal of 1-D (or planar) structures and the invariant axis of 2-D structures are determined, and the proper frame velocity for moving structures is found. These are found either directly or indirectly through the definition of dimensionality. We therefore emphasize that the determination of the dimensionality of a structure is crucial for choosing the most appropriate analysis approach, and failure to do so might lead to misinterpretation of the data. Ways of building various kinds of coordinate systems and reference frames are summarized and compared here, to provide a comprehensive understanding of these analysis tools. In addition, the method of building these systems and frames is shown to be useful not only in space data analysis, but potentially also for simulation/laboratory data analysis and some practical applications. In physics studies, the establishment of two systems is fundamental: one is the reference frame of a system relative to the observer and the other is the coordinate system. A coordinate system establishes the orientation of an observed object/field in space, and a reference frame (with defined velocity) establishes its motion. An appropriate reference frame and coordinate system may help us greatly simplify a given problem, perform calculations more easily, make experimental data more ordered and enable a clearer understanding of physical processes. This is especially vital in space plasma theory, simulation and data analysis. In both theoretical analysis and numerical simulations, the coordinate system and the reference frame are chosen a priori. 
For example, in the theoretical analyses of Kelvin-Helmholtz waves (e.g., Pu and Kivelson 1983), or tearing mode configurations (e.g., Terasawa 1983), the physical fields are set to be 2-D and then the coordinate system is naturally based on the dimensionality (dimension number, the number of spatial degrees of freedom required to describe a field, i.e., whether it is 1-D, 2-D or 3-D); hereafter we refer to such a system as a 'D-based coordinate system'. Examples in simulation work include 2-D simulations of magnetic reconnection (e.g., Lin and Swift 1996; Birn and Hesse 2001; Daughton et al. 2009) or 1-D simulations of plasma processes (e.g., Dawson 1983), which have all used a D-based coordinate system. In these studies, the reference frame is just the frame moving along with the structure, e.g., the current sheet or a flux rope. For the discussion of coordinate systems, in this article we mainly focus on a local Cartesian coordinate system which varies with position. Global coordinate systems can be seen in the review papers by Song and Russell (1999) and Kivelson and Russell (1995). As we have mentioned, in most simulation and theoretical analysis, the natural coordinate system choice is D-based. We should emphasize the definition of dimensionality (dimension number) here. Dimensionality is a basic concept in plasma physics and ordinary fluid dynamics. In most physical problems we only care about the variation of physical fields, and therefore we use the spatial variation of field quantities instead of the field quantity itself to define the dimensionality. For example, say we have a (scalar or vector) field quantity with a given structure in 3-dimensional Cartesian space \(\varphi =\varphi ( x,y,z )\). 
If the field quantity varies in only one direction (say, \(x\)), such that \(\partial \varphi /\partial y = \partial \varphi /\partial z = 0\) (for each Cartesian component, if it is a vector), then we have \(\varphi = \varphi (x)\) and the structure is one-dimensional (1-D). If there is no change along only one direction (say \(z\)), such that \(\partial \varphi / \partial z = 0\), then \(\varphi = \varphi (x,y)\) (i.e., the physical field varies in the \(x\)–\(y\) plane) and the field/structure is two-dimensional (2-D). In this case the \(z\)-direction is known as the invariant axis of the structure. If one cannot find any invariant directions, the field/structure is three dimensional (3-D). For both 1-D and 2-D cases, we allow the existence of all three components of a field quantity. Based on dimensionality, we can establish a local Cartesian coordinate system. This D-based coordinate system has a very clear physical meaning. For example, a flux rope or a flux transfer event (e.g., Russell 1995) is often a 2-D structure and all fields vary little along its axis. Then, if we can determine its dimension number, the invariant axis, which corresponds to the axis of the flux rope, can be found. Another example is a 1-D current sheet (often seen at the magnetopause and the magnetotail or in a shock front) in which all field quantities vary only along its normal direction. If we have determined its dimension number, then the only variation direction is found and it corresponds to just the normal of the current sheet. After we have obtained data from space instruments, we often hope to interpret it through some theoretical work or numerical simulations that are expressed in a D-based coordinate system. However, the data are obtained initially in the spacecraft frame. For example, for a spin-stabilized satellite, one axis is the spin axis and the other two axes are in the plane perpendicular to the spin axis. 
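The dimensionality test defined above can be sketched numerically: count the grid directions along which a sampled field actually varies. This is only an illustrative sketch, assuming a field given on a regular Cartesian grid; the function name `dimensionality`, the tolerance and the test profiles are our own choices, not from the paper.

```python
import numpy as np

def dimensionality(phi, tol=1e-10):
    """Count the spatial directions along which a gridded field
    actually varies (1-D, 2-D or 3-D in the sense defined above)."""
    grads = np.gradient(phi)            # one gradient array per grid axis
    return sum(np.max(np.abs(g)) > tol for g in grads)

# Sample fields on a regular 8x8x8 grid (hypothetical test profiles).
x, y, z = np.meshgrid(np.linspace(0, 1, 8),
                      np.linspace(0, 1, 8),
                      np.linspace(0, 1, 8), indexing="ij")
phi_1d = np.tanh(x)                     # current-sheet-like: varies along x only
phi_2d = np.tanh(x) + y**2              # varies in the x-y plane; z is invariant
phi_3d = np.tanh(x) + y**2 + np.sin(z)  # no invariant direction
```

On these profiles the helper returns 1, 2 and 3 respectively; with real noisy data the hard tolerance would of course have to be replaced by a threshold on relative gradient magnitudes.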
Then, if we know the direction of the spin axis in the Earth's frame we can transform the data to a global coordinate system such as the Geocentric Solar Ecliptic (GSE) or Geocentric Solar Magnetospheric (GSM) coordinates. This step of axis transformation is usually not very difficult and is normally provided in the scientific data of the satellite mission. However, if we intend to analyze the data in a D-based coordinate system, it is not very straightforward. We require a general method to identify this coordinate system through analyzing the data itself. For a 1-D structure such as a current sheet, finding its normal forms the basis of building a D-based coordinate system, because the normal is the only variation direction of the 1-D structure. To completely build a Cartesian coordinate system, we need the other two axes, which can be any two orthogonal directions that are in the plane perpendicular to the normal. Over the past years of space data analysis, various attempts to define such coordinates have been made. The first systematic and quantitative method for establishing a Cartesian coordinate system was the minimum variance analysis (MVA) method proposed by Sonnerup and Cahill (1967). Undoubtedly, the coordinate system established by using the method of MVA (Sonnerup and Cahill 1967) has played, and will continue to play, a key role in single and multiple satellite data analysis. Methods like the timing method (e.g., Russell et al. 1983) or coplanarity analysis (e.g., Schwartz 1998) can also be used to find the normal for building a D-based coordinate system in 1-D cases. For a 2-D structure such as a flux rope, because its axis is the invariant direction, finding the flux rope axis is the first step to build a D-based coordinate system. 
After we have determined the axis, the other two axes can be any two orthogonal directions that are in the plane perpendicular to the axis; we can choose one direction as the projection of the spacecraft path in this plane, and the last axis of this D-based coordinate system completes the right-handed orthogonal set. D-based coordinate systems are a kind of local coordinate system. Other kinds of local coordinate systems, such as the local field-aligned coordinate system used when studying some waves in the magnetosphere (e.g., Hartinger et al. 2011; Shi et al. 2013, 2014), will not be discussed in detail in this article. For the description of processes taking place in space, one must use a reference frame. A good frame of reference is often the frame that is moving along with the structure in space, within which one can analyze the physical processes (note that the "structure" of interest, e.g. a magnetic flux rope, may not be simply moving with the plasma flow). In theory/simulation work, for example, to study a flux tube, reconnection point, or current sheet characteristics, we often need to study the electromagnetic fields and plasma/particle dynamics in the reference frame moving along with the structure. Then the mass, momentum and energy conservation equations can more easily be solved in this structure rest-frame. When we intend to link the data to the physical parameters obtained from theory/simulation, or to give the observed phenomena a physical explanation, if we do not use the same frame, the explanation will be very difficult. However, the measured plasma and electromagnetic field data are gathered in the spacecraft frame, and in most cases of interest the structure moves with respect to the spacecraft. One simple and direct consideration is that if we can find the structure velocity, we will then be able to obtain the reference frame. When the velocity of the structure is determined, one kind of reference frame is established. 
So, as a first step, it is important to determine the proper motion speed of the structure. In the non-relativistic approach (in which the magnetic field is independent of the observational frame of reference, but the electric field and plasma velocity are different in different frames), the time variation of a physical field quantity, \(\varphi \) (which can be the density, temperature, pressure or magnetic field magnitude etc., or one component of a vector field), in the observer frame (normally this can be either the spacecraft or the instrument frame), is

$$ \left. \frac{\partial \varphi}{\partial t} \right|_{\mathrm{obs}} = \left. \frac{\partial \varphi}{\partial t} \right|_{\mathrm{str}} - \vec{V}_{\mathrm{str}} \cdot \nabla \varphi \quad (1.1) $$

where we use the subscript 'obs' to indicate a partial derivative in the observer frame (or spacecraft frame), and 'str' to indicate variation in the frame moving along with the structure. \(\vec{V}_{\mathrm{str}}\) in (1.1) is the structure velocity relative to the observer. This equation means that the time variation of the observed \(\varphi \) can be caused either by the temporal variation (first term on the right) or the spatial variation (second term on the right), or both. In fact equation (1.1) can be derived from the material derivative expanded in the Euler description of a fluid,

$$ \frac{\mathrm{d}\varphi}{\mathrm{d}t} = \frac{\partial \varphi}{\partial t} + \vec{V} \cdot \nabla \varphi , $$

where \(\frac{\mathrm{d}\varphi }{ \mathrm{d}t}\) is the material derivative (also called substantial derivative or particle derivative) that describes the time variation in the frame moving with the material particle, \(\frac{\partial \varphi }{ \partial t}\) is the local derivative representing the time variation in the observer frame and \(\vec{V} \cdot \nabla \varphi \) is the convective derivative. Because the local derivative and the convective derivative are much easier to measure/observe than the material derivative, in practice we normally use the former two to describe the physical processes, although many physical laws like momentum conservation are more conveniently described under a Lagrangian description using material derivatives. 
In space, the structure can be treated as analogous to the material particle, and we then obtain equation (1.1). For all these time derivatives, we use a partial derivative with subscripts indicating the frame, instead of a bare partial or total derivative, because the only difference between the partial and total derivative here is the frame, and from different points of view the partial and total derivative can be changed into each other. For example, the positions of the symbols '\(d\)' and '\(\partial \)' in the equation

$$ \frac{\mathrm{d}\varphi}{\mathrm{d}t} = \frac{\partial \varphi}{\partial t} + \vec{V}_{\mathrm{str}} \cdot \nabla \varphi $$

are opposite to those in (1.1) of Song and Russell (1999), but the equation being used is the same, which means the symbols '\(d\)' and '\(\partial \)' themselves have no physical difference here. Therefore, to state things clearly and avoid unnecessary confusion, we use '\(\partial \)' in place of '\(d\)' and use different subscripts ('obs' or 'str') to distinguish temporal variations in different reference frames. Then, when we intend to use a theory or simulation which is described in the 'str' frame to interpret observations that are measured in the spacecraft frame, equation (1.1) provides a way of frame transformation. In summary, while in simulations and theoretical work the establishment of the coordinate system and the reference frame is straightforward, the determination of these from space data needs some more analysis. In this article, we will review the methods of establishing the D-based coordinate system and ways of constructing the proper frame of reference that are used in space data analysis, and their application in the analysis of structures measured in space. Over the past twenty years, especially following the launch of the ESA Cluster constellation, many multi-point methods have been developed and applied to study space physics processes. 
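Equation (1.1) can be illustrated numerically. Consider a steady 1-D structure \(\varphi(x') = \tanh(x')\) in its own frame, drifting past a fixed observer at speed \(V\): the structure-frame term vanishes, so the observed time variation should equal \(-V\,\partial \varphi /\partial x\). The profile and the numbers below are arbitrary choices of ours, used only as a sanity check of the formula.

```python
import numpy as np

# Steady 1-D structure phi(x') = tanh(x') in its own frame ('str'),
# drifting past a fixed observer with speed V (numbers are arbitrary).
V = 3.0
phi = lambda x, t: np.tanh(x - V * t)   # observer-frame field

x0, t0, h = 0.4, 1.2, 1e-6
# Time derivative seen by the fixed observer (central difference)
dphi_dt_obs = (phi(x0, t0 + h) - phi(x0, t0 - h)) / (2 * h)
# Spatial gradient at the same point and time
dphi_dx = (phi(x0 + h, t0) - phi(x0 - h, t0)) / (2 * h)

# Eq. (1.1) with the structure-frame term zero:
#   dphi/dt|obs = -V_str * grad(phi)
lhs = dphi_dt_obs
rhs = -V * dphi_dx
```

The two sides agree to finite-difference accuracy, which is just the statement that the observed time variation of a steady moving structure is purely convective.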
Applications of these techniques depend critically on correct determination of the structure dimensionality, principal axes and velocity. Nevertheless, we find that in some cases certain techniques were not quite appropriately applied, which may have affected the conclusions. Now that the new NASA constellation Magnetospheric Multiscale (MMS) is operational, it is necessary and timely to clarify these problems and further develop new multi-spacecraft data analysis techniques. In this review paper, the determination of dimensionality, principal axes and velocity will be discussed, and other related methods, such as some single-satellite methods, will be summarized and compared. Applications to reconstruction techniques for magnetic flux ropes, current sheets and other magnetic structures will be introduced, and we will review the conditions under which each method can be most appropriately and effectively used. In both Sects. 2 and 3 we will first review single-spacecraft-based methods, and then multi-point methods, including some traditional methods and their continuing development, and novel approaches developed very recently. In Sect. 2, we will review six methods for building D-based coordinate systems. While all of these methods can build a D-based coordinate system for 1-D structures, some methods may no longer be D-based when applied to higher-dimensional structures. In some of these (but not all) cases, this may be rectified by a simple axis rotation. In Sect. 3 we will review the frame of reference in which the observer resides. We make the argument that finding the observational frame is dependent upon, and closely related to, finding the D-based coordinate system. For example, the application of the traditional Triangulation/Timing methods (Russell et al. 1983) to 2-D structures obtains both the reference frame and a D-based coordinate system together. In Sect. 
4 we discuss some uncertainties and cautions in using some important methods. Since in data analysis different field quantities might have different features, we also discuss the dimensionality for different field quantities. Then we compare all the methods discussed, and show where they can be best applied. We emphasize that different methods have their best application in different circumstances. Therefore, in many cases we advise using different methods on the same event, in order to compare and obtain a more reliable coordinate system and reference frame. Lastly we show some potential applications of some gradient methods in simulation and other circumstances. D-Based Coordinate Systems As we mentioned in Sect. 1, a dimensionality-based coordinate system is very commonly used in the numerical simulation and theoretical analysis of 1-D or 2-D problems. In the data analysis of in-situ observations, a 2-D or 1-D problem is often much easier to study and compare with numerical or theoretical analysis than a three-dimensional (3-D) one. It is important, therefore, to pre-determine the dimensionality (dimension number) and characteristic directions of observed space structures before proceeding with further data analysis. In addition, a reduced dimension number is an assumption of many analysis methods. For example, the widely used MVA method (Sonnerup and Scheible 1998), the multi-spacecraft timing method and its later revisions (e.g., Russell et al. 1983; Zhou et al. 2006a, 2006b; Zhou et al. 2009), and Grad–Shafranov (GS) reconstruction methods (Hau and Sonnerup 1999; Hu and Sonnerup 2002; Sonnerup et al. 2006; Tian et al. 2010, 2014) are all set up for 1-D or 2-D structures. However, even for commonly identified structures the dimension number is not always as expected. For example, the magnetopause current sheet sometimes is not 1-D but has some small 2-D structures embedded (e.g., Sonnerup and Guo 1996). 
Magnetic flux ropes, which are generally regarded as 2-D, may actually be 3-D. In such situations the GS method is not applicable. Therefore, the examination of the structure dimension number with multi-point data is desirable. Over the past and recent years, the dimensionality-based coordinate system in data analysis has been established in various ways. In this Section we will first introduce some traditional and new single-spacecraft methods, followed by a review of some multi-spacecraft approaches, including a method based directly on the definition of dimensionality (dimension number). A comparison of the methods will be made in Sect. 4.3. Sonnerup-Cahill Minimum/Maximum Variance Analysis (MVA) Based Coordinate System The first systematic and quantitative method for establishing a Cartesian coordinate system is the minimum variance analysis (MVA) method proposed by Sonnerup and Cahill (1967). The MVA-based coordinate system is a kind of principal-axes coordinate system. It is the most commonly used method to analyze current layers (e.g., the magnetopause, shocks, or the tail current sheet) and was developed using magnetic field measurements in near-Earth space (Sonnerup and Cahill 1967). This method is easy to understand and implement, and it can always provide a Cartesian coordinate system, which makes it very powerful and useful in the space data analysis community. There is no doubt that coordinate systems established by using the method of MVA (Sonnerup and Cahill 1967) have played and will continue to play an indispensable role in both single- and multi-satellite data analysis. For 1-D cases such as a current sheet, there is only a single variation direction (the normal direction) and this can be found using MVA analysis, leading directly to the construction of a D-based coordinate system. For 2-D cases, the construction of a D-based coordinate system is indirect. Here is one approach (see details in Sect. 
2.2): first we build a coordinate system using the three eigenvectors, \(L\), \(M\) and \(N\), which indicate the maximum, intermediate and minimum variance directions from the MVA method. Then we can rotate any of them to obtain the invariant axis using the method mentioned in Hu and Sonnerup (2002). In principle we do not need \(L\), \(M\) or \(N\) at all; we can just use an arbitrary direction as an initial guess and then rotate it to obtain the invariant axis using a minimization procedure. Once we have determined the invariant axis, the construction of the D-based coordinate system is almost complete. When it was first introduced, the MVA method was based on the assumption that the boundary is 1-D (Sonnerup and Scheible 1998), such that the magnetic field along the normal of the 1-D structure does not vary either temporally or spatially (this requires that both the magnetic and electric fields be 1-D, i.e., that they only vary along one direction). For many 2-D or 3-D structures it is also very useful for providing a local coordinate system (not D-based), although sometimes the physical interpretation of the resulting axes may not be very clear. This method gives three orthogonal axes based on magnetic field measurements taken not at a single time, but over a time interval between two observation times, which are selected arbitrarily but sufficiently far apart to include enough sampled data. Then, in some cases, using different time intervals one may obtain different axes, indicating finer-scale structure; for example, when there are some sublayers within a current sheet. A detailed introduction to the method can be found in Sonnerup and Scheible (1998). 
The starting point of the MVA method is this: for a 1-D magnetic structure, the condition that \(\nabla \cdot \vec{B} =0\) implies, for a suitably rotated set of coordinate unit vectors \(( \vec{n}_{1}, \vec{n}_{2}, \vec{n}_{3} )\), that \({\partial B_{n1}} / \partial n_{1} = {\partial B_{n2}} / {\partial n_{2}} = {\partial B_{n3}} / {\partial n_{3}} = 0\). This is because, for a 1-D structure, variations in all components of \(\vec{B}\) must already be zero in two of the three directions (e.g. \(\vec{n}_{1}\) and \(\vec{n}_{2}\); \(\partial / \partial n_{1} = \partial / \partial n_{2} = 0\)), therefore \(\nabla \cdot \vec{B} = \partial B_{n3} / \partial n_{3} = 0\) for the third direction. (This is in fact the coordinate system that can be determined by the three eigenvectors of the minimum directional derivative (MDD) method described in Sect. 2.4.) This means that \(B_{n3}\) does not change along the direction \(\vec{n}_{3}\). Similarly, the \(\vec{n}_{3}\) component of \(\nabla \times \vec{E}\) vanishes, if \(\vec{E}\) is also 1-D (note that when \(\vec{B}\) is 1-D, it is not always guaranteed that \(\vec{E}\) is also 1-D, as discussed in Sect. 4.2). According to Faraday's law \(\partial \vec{B} / \partial t = - \nabla \times \vec{E}\), we obtain that \(\partial B_{n3}/\partial t = 0\), namely \(B_{n3}\) does not change with time. Then, for a 1-D structure, \(B_{n3}\) varies neither in time nor in space, that is, it is always constant. To find the direction \(\vec{n}_{3}\) which makes \(B_{n3}\) nearly constant, we can use a set of (\(N\)) magnetic field measurements over a given time interval to calculate the variance of the magnetic field in a direction \(\vec{n}\), given by \(\sigma _{n}^{2} = \frac{1}{N} \sum_{i=1}^{N} ( B_{n} ( i ) - \langle B_{n} ( i ) \rangle ) ^{2}\) (where \(B_{n} ( i ) = \vec{B} ( i ) \cdot \vec{n}\)), and find the direction \(\vec{n}\) that minimizes \(\sigma _{n}^{2}\). 
In practice, as shown in detail by Sonnerup and Scheible (1998), this can be expressed as a conditional minimization/maximization problem and can be solved by calculating the eigenvalues and eigenvectors of the symmetric matrix \(M_{ij} = \langle B_{i} B_{j} \rangle - \langle B_{i} \rangle \langle B_{j} \rangle \). The three eigenvalues of \(M_{ij}\) (\(\lambda _{1}\), \(\lambda _{2}\), \(\lambda _{3}\)) are real and the eigenvectors \(L\), \(M\) and \(N\) are perpendicular to each other. Then the three eigenvectors build a new coordinate system. If the satellite passes through a 1-D structure, this is indicated by \(\lambda _{1}, \lambda _{2} \gg \lambda _{3}\) (the converse of this may not be true, see Sonnerup and Scheible (1998) and the discussion below in this section). In this case the normal direction of this 1-D structure is just along the eigenvector corresponding to the minimum eigenvalue, \(\vec{n}_{3}\). Then, for a 1-D case the three orthogonal eigenvectors derived from \(M_{ij}\) build a D-based coordinate system. But if \(\lambda _{1}, \lambda _{2} \gg \lambda _{3}\), the structure is not necessarily 1-D; see Sonnerup and Scheible (1998) and the discussion below in this section. Note that even for 2-D or 3-D structures, as the matrix \(M_{ij} = \langle B_{i} B_{j} \rangle - \langle B_{i} \rangle \langle B_{j} \rangle \) is symmetric, we can always obtain three real eigenvalues and three mutually perpendicular eigenvectors corresponding to them. Therefore, even for a 2-D or 3-D structure, in practice we are still able to calculate the three eigenvectors and eigenvalues. The MVA analysis thus essentially simplifies the problem, and can always help us find a new usable coordinate system. It should also be noted that because in 2-D or 3-D problems the coordinate system is no longer D-based, the meaning of each axis is not necessarily clear, and should be evaluated case by case. As described in Sonnerup and Scheible (1998), from the MVA method alone one cannot know whether the structure is one/two-dimensional or not. 
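The variance-matrix recipe just described is straightforward to implement. The sketch below builds \(M_{ij} = \langle B_{i} B_{j} \rangle - \langle B_{i} \rangle \langle B_{j} \rangle\) from a synthetic current-sheet crossing and reads the normal off the minimum-variance eigenvector; the helper name `mva` and the synthetic field model are ours, chosen only for illustration.

```python
import numpy as np

def mva(B):
    """Minimum variance analysis of an (N, 3) field time series.
    Builds M_ij = <B_i B_j> - <B_i><B_j> and returns the eigenvalues
    in descending order with the matching eigenvectors as rows."""
    B = np.asarray(B, dtype=float)
    m_ij = (B[:, :, None] * B[:, None, :]).mean(axis=0) \
        - np.outer(B.mean(axis=0), B.mean(axis=0))
    w, v = np.linalg.eigh(m_ij)       # eigh returns ascending eigenvalues
    order = np.argsort(w)[::-1]
    return w[order], v[:, order].T    # rows: max, intermediate, min variance

# Synthetic 1-D current-sheet crossing: B rotates in the x-y plane,
# with a small constant normal component along z (model is ours).
t = np.linspace(-1, 1, 201)
B = np.c_[np.tanh(3 * t),             # reversing tangential component
          0.5 / np.cosh(3 * t),       # out-of-plane component
          0.1 * np.ones_like(t)]      # constant normal component
lam, axes = mva(B)
# The minimum-variance eigenvector axes[2] recovers the normal (z here).
```

For this clean model \(\lambda_{1} \gg \lambda_{2} \gg \lambda_{3} \approx 0\) and `axes[2]` aligns with \(\hat{z}\); with real data the eigenvalue ratios must be checked before trusting the normal, as discussed above.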
For example, for a 2-D flux tube, the three directions from the MVA method cannot always strictly denote the axial direction of the flux tube. Using simulated flux ropes in a 3-D MHD simulation, Xiao et al. (2004) found that for different virtual satellite crossing paths, the axial direction of the flux tube is close to different eigenvector directions of the MVAB method. It is suggested to use MVAJ to help determine the axial direction, because in practice a flux rope has a very strong electric current along the axis (Xiao et al. 2004; Haaland et al. 2004), and then the maximum variance of the current density should be along the axis. As shown in Fig. 1, we tested the ability of the MVAB (Fig. 1b, c, and d) and MVAJ (Fig. 1e, f, and g) methods to determine the axial direction in a magnetic field generated from a self-consistent 2-D flux rope model. In this model, the axial field \(B_{z}\) can be taken as different functions of the magnetic potential \(A\), corresponding to flux ropes with different structure (e.g. Tian et al. 2019). The red lines in Fig. 1a represent 25 test paths with different impact parameter (IP), the minimum distance between the path and the axis center, in the cross section of the flux rope. The second, third and fourth rows show angles between \(L\), \(M\), \(N\) and the true invariant \(z\)-axis for three flux ropes with different \(B_{z}\), respectively. We find that the \(M\) (minimum variation direction) and \(L\) (maximum variation direction) vectors are close to the invariant axis (i.e., within \(30^{\circ}\)) for MVAB and MVAJ, respectively, only when the impact parameter is close to zero, for flux ropes with non-zero axial fields. Nevertheless, MVAJ has been successfully used in the analysis of data from the Cluster mission (Escoubet et al. 2001) to determine the axis of a flux rope (e.g., Pu et al. 2005) and a discontinuity (Haaland et al. 2004; Rezeau et al. 2018). The results of MVA also depend on the model of the structure (Tian et al. 
2019; also see Lepping et al. 1990 for a similar experiment). For example, when the axial field is zero, \(N\) from MVAB or \(L\) from MVAJ can well characterize the axis direction (Fig. 1d and g). (a) The cross section of a model magnetic flux rope. Magnetic field lines are plotted with black lines and 25 paths for MVA tests are over-plotted with red lines. (b–d) show the angles between the eigenvectors \(\mathbf{L}\), \(\mathbf{M}\) and \(\mathbf{N}\) of the MVAB analysis and the true invariant \(z\) axis for three types of flux rope: (i) \(p= \frac{e^{-2A}}{3 \mu _{0}}\), \(B _{z} = \frac{e^{-A}}{\sqrt{3}}\), representing a normal flux rope; (ii) \(p=0\), \(B_{z} = e^{-A} \), representing a force-free flux rope; and (iii) \(p= \frac{e^{-2A}}{2 \mu _{0}}\), \(B_{z} =0\), representing a magnetic island, where \(p\) denotes plasma pressure, \(A\) is the magnetic potential and \(B_{z}\) is the axial magnetic field. (e–g) have the same format as (b–d) but for MVAJ analysis (adopted from Tian et al. 2019). The vector with the smallest angle to \(z\) is closest to the actual axial direction of the flux rope model. The MVA method can be used not only to analyze the magnetic field and current density, but can also be applied to the electric field (Sonnerup and Scheible 1998), the mass flow \(\rho \vec{V}\) (e.g., Sonnerup and Scheible 1998; Zhao et al. 2016), the velocity vector \(\vec{V}\) (e.g., Knetter 2005; Ling et al. 2018), and other vector fields. For a 1-D structure, as we mentioned above, the magnetic field along the normal varies neither in time nor in space. Theoretically, this does not hold for 2-D or 3-D structures, but we can still perform MVA on a time series of data to obtain a coordinate system which is in many cases better than the original system for the problem we need to analyze. When one uses the GS reconstruction method (e.g., Sonnerup and Guo 1996; Sonnerup et al. 2006; Hasegawa et al. 2007; Tian et al. 
2014, 2019) to reconstruct a flux tube, since a sufficiently accurate axis is required, the minimum or intermediate variance direction from MVA needs to be rotated through an angle to approach the real axial direction of the flux tube (e.g. Hu and Sonnerup 2002); then a D-based coordinate system is built. In this way, MVA can act as an indirect way to build a D-based coordinate system. A GUI interface for the MVA method can now be accessed in the Space Physics Environment Data Analysis System (SPEDAS). D-Based Coordinate System for a 2-D Structure Based on the Grad–Shafranov Reconstruction Method The series of Grad–Shafranov (GS) and MHD reconstruction methods are formulated for 2-D structures in a D-based coordinate system, and the reconstruction plane is chosen to be the plane perpendicular to the invariant axis (e.g. Hau and Sonnerup 1999; Hu and Sonnerup 2002; Hasegawa et al. 2007; Teh et al. 2007; Sonnerup et al. 2006, 2016; Sonnerup and Teh 2008; Hasegawa et al. 2017). This reconstruction method has been applied to 2-D flux ropes (e.g., Hau and Sonnerup 1999; Hu and Sonnerup 2002; Hasegawa et al. 2007), the magnetopause current sheet (e.g., Hasegawa et al. 2004), reconnection structures (Teh et al. 2010), and drift mirror structures (e.g., Tian et al. 2012). The construction of a D-based coordinate system as the first step is essential to the whole reconstruction. The invariant axis can be determined, and then the D-based coordinate system established, by the GS technique if the structure encountered is 2-D, time-independent and magnetohydrostatic (Sonnerup et al. 2006). To obtain the axis through this technique one needs a reference frame, which can be obtained through the methods discussed in Sect. 3. For such a structure, the three quantities, the thermal pressure \(p\), the axial component of the magnetic field \(B_{z}\) and hence the transverse pressure \(P_{t} =p+ \frac{B_{z}^{2}}{2 \mu _{0}}\), are field-line invariants. 
If the spacecraft trajectory intersects a field line in the 2-D plane more than once, these field line invariants should have the same values at each intersection point of that field line. Figure 2 shows the cross section of a Lundquist flux rope. The \(x\) axis is the trajectory of the spacecraft. The small circles and stars represent samples in the left and right half of the flux rope, respectively. \(A_{l}\) and \(A_{m}\) indicate the initial and maximum values of the out-of-plane component of the magnetic vector potential. Hu and Sonnerup (2002) introduced a residue parameter \(\mathrm{RES}=[ \sum_{i=1}^{m_{0}} ( P_{t,i}^{1\mathrm{st}} - P_{t,i}^{2\mathrm{nd}} )^{2} ]^{\frac{1}{2}} /| \max ( P_{t} ) - \min (P_{t} )| \) to represent the degree of scatter of \(P_{t}\), where \(m_{0}\) is the number of points interpolated between \(A_{l}\) and \(A_{m}\). By testing trial axes with directions varying over a hemisphere, the optimal axis can be found where RES reaches its minimum. The top panel shows the cross section of the Lundquist flux rope model centered at \((x, y)=(0, 0.5)\). The solid line along the \(x\) axis is the projected spacecraft trajectory. The bottom panel shows the relationship between the transverse pressure \(P_{t}(x,0)\) and the magnetic potential \(A(x,0)\) for an incorrect \(z\) axis. The small circles denote data points collected by a virtual spacecraft on the inbound trajectory. The stars denote data points on the outbound trajectory. \(A_{l}\) and \(A_{m}\) are the magnetic potentials at the starting point and the point of the closest approach, respectively. \(A \in [A_{l}, A_{m}]\) is uniformly interpolated by \(m_{0}\) points with index \(i \in [1,m_{0}]\) for calculating the residue RES (adapted from Hu and Sonnerup 2002). Figure 3 shows the residue map for a magnetic flux rope crossing event observed by the Magnetospheric Multiscale (MMS) spacecraft (Burch and Phan 2016).
The resolution of the search grid is 10 degrees in the longitude direction and 5 degrees in the latitude direction. This shows that, except for the axis (\(L\)) from MVAJ, the axis directions estimated by the various methods are very consistent with each other. It should be noted that the broad area encircled by the contour line in Fig. 3, drawn at 1.5 times the minimum RES, indicates some uncertainty of this method. However, for events in which many field lines are encountered only once, such as magnetopause crossings, the above method will fail. Polar map of axis directions for the magnetic flux rope event on 15 Oct. 2015, see text for detail (adapted from Tian et al. 2019). Small dots indicate the search grid points on the hemisphere of unit radius. The '+' point on the pole indicates the minimum variance direction from the MDD method. The direction deduced by the minimum residue method of Hu and Sonnerup (2002) is marked with a triangle. An asterisk denotes the maximum variance direction from MVAJ. The square indicates the medium variance direction from MVAB. If multi-satellite data are available, the invariant axis can be obtained by trial and error in another way. Hasegawa et al. (2004) used the intermediate variance direction of the MVA analysis with the constraint \(\langle \vec{B} \rangle \cdot \vec{n} =0\) based on single-satellite data, where \(\vec{n}\) is the minimum variance direction in MVA, acting as the initial \(z\)-axis to conduct a GS reconstruction. The resulting optimal axis is the one for which the correlation coefficient between the magnetic fields reconstructed in the map and the fields actually observed by the other spacecraft reaches its highest value. Figure 4 shows a case of magnetopause reconstruction. A high correlation coefficient of 0.9790 (Fig. 4b) between the predicted and measured magnetic fields suggests that the invariant axis is well determined. The high correlation coefficient also indicates that the conditions, i.e. 2-D and stationary, are suitable for the GS technique.
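The hemispherical residue scan described above can be sketched numerically as follows. This is a minimal illustration, not the original authors' code: the function names, the synthetic trajectory frame (one in-plane axis chosen per trial axis), and the convention \(A(x) = -\int B_{y}\,dx\) for the potential along the trajectory are our own assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability

def folding_residue(Pt, A, m0=25):
    """RES of Hu & Sonnerup (2002): scatter of Pt(A) between the two branches."""
    k = int(np.argmax(np.abs(A)))            # closest approach, A = A_m
    A1, P1 = A[:k + 1], Pt[:k + 1]           # first branch
    A2, P2 = A[k:], Pt[k:]                   # second branch
    s1, s2 = np.argsort(A1), np.argsort(A2)  # order each branch by A
    lo = max(A1.min(), A2.min())
    hi = min(A1.max(), A2.max())
    Ai = np.linspace(lo, hi, m0)             # m0 interpolation points in [A_l, A_m]
    d = np.interp(Ai, A1[s1], P1[s1]) - np.interp(Ai, A2[s2], P2[s2])
    return np.sqrt(np.sum(d ** 2)) / (Pt.max() - Pt.min())

def residue_map(B, p, dx, dlon=10.0, dlat=5.0):
    """Scan trial z axes over a hemisphere; return (best_axis, min_RES).
    B: (N, 3) field samples, p: (N,) pressure, dx: spacing along the trajectory."""
    best, best_res = None, np.inf
    for lat in np.arange(0.0, 90.0 + dlat / 2, dlat):
        for lon in np.arange(0.0, 360.0, dlon):
            cl = np.cos(np.radians(lat))
            z = np.array([cl * np.cos(np.radians(lon)),
                          cl * np.sin(np.radians(lon)),
                          np.sin(np.radians(lat))])  # trial axis
            ref = np.array([1.0, 0.0, 0.0])
            if abs(z @ ref) > 0.9:                   # avoid ref parallel to z
                ref = np.array([0.0, 1.0, 0.0])
            x = ref - (ref @ z) * z                  # in-plane trajectory axis
            x /= np.linalg.norm(x)
            y = np.cross(z, x)
            A = -np.cumsum(B @ y) * dx               # A(x) = -integral of B_y dx
            Pt = p + (B @ z) ** 2 / (2 * MU0)        # transverse pressure
            r = folding_residue(Pt, A)
            if r < best_res:
                best, best_res = z, r
    return best, best_res
```

For a correct axis, \(P_{t}(A)\) collapses onto a single curve and RES approaches zero, which is the property the scan exploits.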
For Cluster observed events, Hasegawa et al. (2005, 2006) further showed that by ingesting data from all four Cluster spacecraft, four independent field maps, one for each spacecraft, can be reconstructed and then merged into an optimized GS map. (a) Reconstructed magnetic field map for a magnetopause crossing by the Cluster 1 spacecraft on 30 June 2001 (adapted from Hasegawa et al. 2004). Contours indicate the magnetic field lines projected onto the reconstruction plane and color shows the magnetic field component along the invariant axis. The measured magnetic fields from all four spacecraft are overplotted on the plane as white arrows. (b) Correlation between the measured and recovered magnetic field components Multi-point Timing—Setting a D-Based Coordinate System for a 1-D Structure For 1-D structures, timing methods can help to build a D-based coordinate system after finding the normal direction. Since one can obtain the normal and the velocity at the same time from this approach, we will discuss it in detail in Sect. 3.3. There are a number of different versions of this method that differ in their starting assumptions. For example, assuming the velocity is constant we have CVA (Constant Velocity Approach: e.g., Russell et al. 1982; Knetter et al. 2004), while assuming the thickness is constant we get CTA (Constant Thickness Approach: Haaland et al. 2004). Other related approaches are DA (Discontinuity Analyzer, Dunlop and Woodward 1998) and MTV (Minimum Thickness Variation, Paschmann et al. 2005). In this review we will mainly discuss CVA. Method of Building a D-Based Coordinate System Through Definition of Dimensionality: Minimum Directional Derivative (MDD) Analysis Shi et al. (2005) proposed a method directly based on the definition of dimensionality. Since this analysis method is derived from looking for the minimum derivative along various directions, it was named "Minimum Directional Derivative (or Difference)" analysis, or MDD analysis for short.
Note that although other ways of building a D-based coordinate system are not as straightforward as the MDD method from the definition of the dimension number, they are still very necessary, especially when the estimation of the field gradient fails, which happens in many cases. A GUI interface for the MDD method can now be accessed in SPEDAS. Review of the Analysis Processes First we discuss the dimension number determination for the magnetic field. For other parameters like the electric field or the flow field the algebraic manipulations are the same. For a 1-D or 2-D structure, if a certain direction \(\vec{n}\) is along the invariant direction, i.e. along which all the parameters remain constant, from the definition of dimensionality we mentioned in Sect. 1, it will certainly satisfy that the directional derivative along \(\vec{n}\) of every component of the magnetic field is equal to zero, i.e., \(\partial B_{x}/\partial n=0\), \(\partial B _{y}/\partial n=0\) and \(\partial B_{z}/\partial n=0\), where \(x\), \(y\), and \(z\) are the axes of a certain coordinate system such as GSE, and then one finds \(( \partial \vec{B}/\partial n )^{2} = ( \partial B_{x}/\partial n )^{2} + ( \partial B_{y}/\partial n )^{2} + ( \partial B_{z}/\partial n )^{2} =0\). To find the invariant direction \(\vec{n}\), we just need to find the minimum value of \(( \partial \vec{B}/\partial n )^{2}\). Therefore we must first calculate the gradient of the magnetic field. Using the measurements of a multi-spacecraft system with at least four spacecraft, it is not difficult to estimate all nine components of the magnetic gradient tensor \(G=\nabla \vec{B}\) at every observing moment, using various methods of estimation. For the case of four spacecraft such as Cluster or MMS, linear estimation is appropriate and identical results can be obtained from different methods including least squares methods (Harvey 1998; Chanteur and Harvey 1998), the barycentric method (Chanteur 1998), and the Taylor expansion method (Pu et al. 2003), etc. The least squares method can be easily applied when there are more than four points of measurements.
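As a concrete illustration of such a linear estimate, the sketch below (our own; the function name is hypothetical) solves the four-point difference equations for \(G = \nabla \vec{B}\). For more than four measurement points the same overdetermined system can be solved by least squares (e.g. with `np.linalg.lstsq`).

```python
import numpy as np

def gradient_tensor(r, B):
    """Linear four-point estimate of G = grad(B).
    r: (4, 3) spacecraft positions; B: (4, 3) measured field vectors.
    Returns G with G[i, j] = dB_j / dx_i, first-order accurate."""
    dr = r[:3] - r[3]   # positions relative to the fourth spacecraft
    dB = B[:3] - B[3]   # corresponding field differences
    # Each field component B_j obeys dB_j = dr @ grad(B_j):
    # three linear equations per component, solved for all three at once.
    return np.linalg.solve(dr, dB)
```

For a strictly linear field the estimate is exact, which provides a convenient self-check of an implementation.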
Here we briefly introduce the Taylor expansion scheme (Pu et al. 2003) to calculate \(\nabla \vec{B}\), which can be expanded as $$ G=\nabla \vec{B} = \left[ \textstyle\begin{array}{ccc} \frac{\partial B_{x}}{\partial x} & \frac{\partial B_{y}}{\partial x} & \frac{\partial B_{z}}{\partial x} \\ \frac{\partial B_{x}}{\partial y} & \frac{\partial B_{y}}{\partial y} & \frac{\partial B_{z}}{\partial y} \\ \frac{\partial B_{x}}{\partial z} & \frac{\partial B_{y}}{\partial z} & \frac{\partial B_{z}}{\partial z} \end{array}\displaystyle \right] . $$ Taking the \(B_{x}\) components of \(\nabla \vec{B}\) as an example, the Taylor expansion, accurate to first order, of \(B_{x}\) measured by satellites C1, C2 and C4 in the vicinity of C3 is $$ B_{xi} = B_{x3} +\Delta \vec{r}_{i3} \cdot \nabla B_{x3} \quad (i=1,2,4), \qquad (2.2) $$ where \(\Delta \vec{r}_{i3} = \vec{r}_{i} - \vec{r}_{3}\) represents the position of satellite C\(i\) relative to C3, and \(\nabla B_{x3} = (\frac{\partial B_{x3}}{ \partial x},\frac{\partial B_{x3}}{\partial y},\frac{\partial B_{x3}}{ \partial z})\) indicates the \(B_{x}\) gradient at the C3 position. Since the \(B_{x}\) components and \(\Delta \vec{r}_{i3}\) can easily be obtained from observation, it is then easy to calculate \(\nabla B_{x3}\) by solving the three linear equations (2.2). For a linear approximation, \(\nabla B_{x}\) is identical using different satellites, so we can use \(\nabla B_{x3}\) to represent the \(\nabla B _{x}\) that we need. In the same way, all the components of \(\nabla \vec{B}\) can be obtained, which are first-order accurate at C3. Now we turn back to the question of finding the minimum value of \(( \partial \vec{B}/\partial n )^{2} = ( \partial B_{x}/\partial n )^{2} + ( \partial B_{y}/\partial n )^{2} + ( \partial B_{z}/\partial n )^{2}\). The product of \(\vec{n}\) and \(\nabla \vec{B}\) is $$ \vec{D} = \vec{n} \cdot \nabla \vec{B} =\partial \vec{B}/\partial n=(\partial B_{x}/\partial n,\partial B_{y}/\partial n,\partial B_{z}/\partial n ). $$ Then, given the estimation of the matrix \(\nabla \vec{B}\), the invariant axis \(\vec{n}\) can be determined by minimization of \(D^{2} = ( \partial \vec{B}/\partial n )^{2} = ( \partial B_{x}/\partial n )^{2} + ( \partial B_{y}/\partial n )^{2} + ( \partial B_{z}/\partial n )^{2}\), and this minimization is subject to the normalization constraint \(\vert \vec{n} \vert ^{2} - 1 = 0\).
In order to solve this problem of conditional extremum, we introduce a Lagrange multiplier \(\lambda \) and seek the solution of three linear equations $$ \left \{ \textstyle\begin{array}{l} \displaystyle\frac{\partial }{\partial n_{x}} \bigl( D^{2} - \lambda \bigl(| \vec{n} |^{2} - 1\bigr) \bigr) = 0 \\ \displaystyle\frac{\partial }{\partial n_{y}} \bigl( D^{2} - \lambda \bigl(| \vec{n} |^{2} - 1\bigr) \bigr) = 0 \\ \displaystyle\frac{\partial }{\partial n_{z}} \bigl( D^{2} - \lambda \bigl(| \vec{n} |^{2} - 1\bigr) \bigr) = 0 \end{array}\displaystyle \right ., \qquad (2.4) $$ where (\(n_{x}\), \(n_{y}\), \(n_{z}\)) are the three components of \(\vec{n}\) in the original coordinate system in which the magnetic field data are given. Carrying out the differentiations, Eqs. (2.4) become $$ \left \{ \textstyle\begin{array}{l} \displaystyle\vec{n} \cdot \nabla \vec{B} \cdot \frac{\partial \vec{D}}{\partial n_{x}} = \lambda n_{x} \\ \displaystyle\vec{n} \cdot \nabla \vec{B} \cdot \frac{\partial \vec{D}}{\partial n_{y}} = \lambda n_{y} \\ \displaystyle\vec{n} \cdot \nabla \vec{B} \cdot \frac{\partial \vec{D}}{\partial n_{z}} = \lambda n_{z} \end{array}\displaystyle \right .. $$ Note that the partial derivatives \(\partial /\partial n_{x}\), \(\partial /\partial n_{y}\), and \(\partial /\partial n_{z}\) in the above equations are applied holding (\(x\), \(y\), \(z\)) constant, hence these equations simplify to $$ \left \{ \textstyle\begin{array}{l} \displaystyle\vec{n} \cdot \nabla \vec{B} \cdot \frac{\partial \vec{B}}{\partial x} = \lambda n_{x} \\ \displaystyle\vec{n} \cdot \nabla \vec{B} \cdot \frac{\partial \vec{B}}{\partial y} = \lambda n_{y} \\ \displaystyle\vec{n} \cdot \nabla \vec{B} \cdot \frac{\partial \vec{B}}{\partial z} = \lambda n_{z} \end{array}\displaystyle \right .. $$ Finally, these equations have the form of an eigenvalue problem $$ ( L -\lambda I ) \vec{n} =0, $$ where \(L = G G^{T} =(\nabla \vec{B} ) ( \nabla \vec{B} )^{T}\) (\(T\) denotes transposition) is a symmetrical matrix. Therefore the eigenvalues of \(L\) are all real and the corresponding eigenvectors are orthogonal. It can be demonstrated (by writing the matrix \(L\) in the eigenvector basis, where the matrix \(L\) is diagonal) that the three eigenvalues \(\lambda _{1}\), \(\lambda _{2}\) and \(\lambda _{3}\) represent the maximum, intermediate and minimum values of \(D^{2}\).
The three eigenvectors \(\vec{n}_{1}\), \(\vec{n}_{2}\) and \(\vec{n}_{3}\) thus represent the three directions along which \(D^{2}\) has the maximum, intermediate, and minimum values, which are \(\vert \partial \vec{B} / \partial n_{1} \vert ^{2}\), \(\vert \partial \vec{B} / \partial n_{2} \vert ^{2}\), and \(\vert \partial \vec{B} / \partial n_{3} \vert ^{2}\), respectively. Thus the three eigenvalues can be viewed as indicators for determining the dimension number of the magnetic structure, since they identify directions along which the spatial gradients are large or small. Generally, we can say that if \(\lambda _{1}\), \(\lambda _{2}\) and \(\lambda _{3}\) are not very far from each other within a structure, we can regard it as a 3-D structure. If \(\lambda _{1}, \lambda _{2}\gg \lambda _{3}\), we can deem it a quasi-2-D structure with its invariant direction along \(\vec{n}_{3}\), i.e., \(\partial / \partial n_{3}=0\). If \(\lambda _{1}\gg \lambda _{2}, \lambda _{3}\), then it can be regarded as a quasi-1-D structure, with the invariant axes in the plane of \(\vec{n}_{2}\) and \(\vec{n}_{3}\), and the only variant direction along \(\vec{n}_{1}\). Here we briefly summarize the practical steps of dimension number determination and D-based coordinate system setup in data analysis (see Fig. 5). First, estimate the field gradient tensor \(G = \nabla \vec{B}\) (\(\vec{B}\) can be replaced by any vector field, e.g. \(\vec{V}\) or \(\vec{E}\)) at every moment from multi-point measurements. Second, find the eigenvalues and eigenvectors of the symmetrical matrix \(L = GG^{T} = ( \nabla \vec{B} ) ( \nabla \vec{B} )^{T}\). The three eigenvalues \(\lambda _{\max }\), \(\lambda _{\mathrm{mid}}\), and \(\lambda _{\min }\) represent the maximum, intermediate and minimum values of the field directional derivatives, and the three eigenvectors \(\vec{n}_{\max }\), \(\vec{n}_{\mathrm{mid}}\) and \(\vec{n}_{\min }\) represent the corresponding directions.
Third, based on these calculations we determine the dimensionality and characteristic directions of the structure, as shown in Fig. 5. One special case not mentioned in Fig. 5 is \(\lambda _{1}\gg \lambda _{2}\gg \lambda _{3}\): the structure can then be regarded as 1-D or 2-D depending on one's point of view. Finally, the directions \(\vec{n}_{\max }\), \(\vec{n}_{\mathrm{mid}}\) and \(\vec{n}_{\min }\) can be used to build a D-based coordinate system. Steps of MDD tool to determine the structure dimension number and principal directions It is worth noting here that, when calculating eigenvectors, \(\vec{n}\) and \(-\vec{n}\) are the same eigenvector (the same situation also appears in the MVA analysis; see the discussion in Sonnerup and Scheible 1998). For an ordered visualization of the results, one way is to arbitrarily set the \(x\) (or \(y\), \(z\)) component of \(\vec{n}\) to be positive, so that we obtain a series of directions which can be compared with each other, and one can also calculate the average direction or check for variations of the structure. Another point we would like to mention here is the attempt to find a quantitative index of the dimension number in order to visualize the effective dimensionality more easily. Rezeau et al. (2018) recently introduced three parameters that may be used as proxies, \(D_{1} = (\lambda _{\max } - \lambda _{\mathrm{mid}} )/\lambda _{\max } \), \(D _{2} = (\lambda _{\mathrm{mid}} - \lambda _{\min } )/\lambda _{ \max } \) and \(D_{3} = \lambda _{\min } /\lambda _{\max } \), which all vary from 0 to 1 and whose sum is 1. When \(\lambda _{\max }\), \(\lambda _{\mathrm{mid}}\) and \(\lambda _{\min }\) are comparable to each other, one obtains \(D_{1} \approx 0\) and \(D_{2} \approx 0\), while \(D_{3} \approx 1\), indicating a quasi-3-D case. When \(\lambda _{\max } \gg \lambda _{\mathrm{mid}}, \lambda _{\min } \), one obtains \(D_{1} \approx 1\), while \(D_{2} \approx 0\) and \(D_{3} \approx 0\), which indicates a quasi-1-D case.
When \(\lambda _{\max },\lambda _{\mathrm{mid}}\gg \lambda _{\min } \) one obtains \(D_{1} \approx 0\), \(D_{3} \approx 0\), while \(D_{2} \approx 1\), which indicates a quasi-2-D case. However, the difference between these three cases is not always clear and the three proxies are not always ideal. Considering a flux rope with \(\lambda _{\max } = 5\), \(\lambda _{\mathrm{mid}} = 1\) and \(\lambda _{\min } = 0.1\), for example, we get the dimensionality proxies \(D_{1} = 0.8\), \(D_{2} = 0.18\) and \(D_{3} = 0.02\). The structure can be regarded as 1-D, but it shows a slight 2-D character since \(D_{2}\) is not negligible (1-D, but much more 2-D than 3-D). The fact that \(D_{1} > D _{2}\) indicates that the tube is strongly flattened in one direction, showing a transition between 1-D (flattened tube) and 2-D (circular tube). Such flux rope structures have been shown in Shi et al. (2006) from Cluster data and Tian et al. (2019) from MMS data. Moreover, direct comparison of eigenvalues may overestimate the difference between spatial gradients, since the eigenvalues are actually the squares of the spatial gradients along the corresponding eigenvectors. Since \(\sqrt{\lambda } \) is equivalent to the directional derivative with the same units, we can also use \(\sqrt{\lambda } \) instead of \(\lambda \) in the calculations stated above. Tian et al. (2019) have also introduced some other parameters to indicate the dimension number. Denton et al. (2010, 2012) have proposed a modified method and tested it using simulation data, which will be discussed in Sects. 4.1 and 4.4. Rezeau et al. (2018) have also proposed generalized MDD methods, which will be mentioned in Sect. 4.2. Normal of a 1-D Structure and D-Based Coordinate For a 1-D structure, as discussed in Sect. 1, all the parameters vary only in one direction, i.e., the maximum derivative direction, which is also the normal of the structure.
Therefore, we can use the MDD analysis to determine the normal of a quasi-1-D discontinuity and then build a D-based coordinate system. For a 1-D case, the maximum derivative direction \(\vec{n}_{\max }\) from the MDD analysis is along the gradient of the total magnetic field. This can be demonstrated as follows: in the MDD coordinate system, for a 1-D structure, \(\nabla B = (0,0,\partial B/ \partial n_{\max})\) is just along the \(\vec{n}_{\max }\) direction. In the same way, for 2-D cases one can find that \(\nabla B = (0,\partial B/\partial n_{\mathrm{mid}},\partial B/ \partial n_{\max})\) is in the plane perpendicular to \(\vec{n}_{\min }\), not solely along \(\vec{n}_{\max }\) or \(\vec{n}_{\mathrm{mid}}\). Here we perform a simulation in which a cluster of spacecraft moves across a 1-D Harris current sheet (similar to the magnetotail current sheet) modelled as $$ \vec{B} = B_{x0} \tanh \biggl( \frac{z}{L_{0}} \biggr) \vec{e}_{x} + B_{z0} \vec{e}_{z}, \qquad (2.7) $$ from which we can easily see that the normal of the current sheet is along the \(z\) direction and the fields do not vary in the \(x\)–\(y\) plane. Note that the variation is still 1-D although the magnetic field components in both the \(x\) and \(z\) directions are non-zero. We assume four virtual satellites traverse this model 1-D current sheet and plot the MDD analysis result in Fig. 6. The satellites cross the current sheet from top left to bottom right, as shown in Fig. 6h, where the field lines in the \(xz\) plane are also plotted. The magnetic field components detected by one of the four virtual spacecraft are plotted in the first panel of Fig. 6. From panels 6b and c one can easily find that the results of the analysis indicate a 1-D feature of the structure: the maximum eigenvalue \(\lambda _{\max } \) corresponds to the \(z\) direction, and the other two eigenvalues \(\lambda _{\mathrm{mid}}\) and \(\lambda _{ \min } \) are close to zero. Then the calculated normal direction \(\mathit{Nmax}\) is clearly along the \(z\) direction, as set in the model.
Unlike the well-determined normal direction, we cannot distinguish \(\mathit{Nmid}\) and \(\mathit{Nmin}\) because they are both invariant directions, and \(\mathit{Nmid}\) and \(\mathit{Nmin}\) can be any orthogonal directions (in the \(x\)–\(y\) plane) perpendicular to \(\mathit{Nmax}\), which is also consistent with the properties of a 1-D structure. The fluctuations in \(\lambda _{\min}\) and \(\lambda _{\mathrm{mid}}\) in Fig. 6b are as expected, and indicate that the variations along \(\mathit{Nmin}\) and \(\mathit{Nmid}\) are so small that numerical errors are dominant. This is consistent with the configuration of the field and confirms the reliability of the calculation. Since \(\lambda _{\mathrm{mid}}\) and \(\lambda _{\min } \) are expected to be close to zero, small random errors are added to avoid the construction of a singular matrix in the calculation of eigenvalues of the 1-D field gradient. From the MDD analysis, these two orthogonal directions in the \(x\)–\(y\) plane are not constant throughout the current sheet, which means that for a pure 1-D structure the minimum and intermediate directions are not well defined, although they are both in the plane perpendicular to the normal. Then for this 1-D structure, the D-based coordinate system has one definite axis, i.e., the normal of the current sheet. If we wish to prevent the other two axes from varying with time, we can set one axis along the magnetic field projected onto the \(\mathit{Nmid}\)–\(\mathit{Nmin}\) plane, whose direction is invariant. Another way is to use MVAB or the minimum gradient analysis method discussed in Sect. 2.6 to obtain one definite axis along \(x\). MDD result for four virtual satellites traversing a modeled 1-D current sheet (equation (2.7) with \(L_{0} = 100~\mbox{km}\), \(B_{x0} = 40~\mbox{nT}\), \(B_{z0} = 10~\mbox{nT}\)).
(a) Magnetic field observed along the trajectory; (b) square roots of the eigenvalues \(\lambda _{\max }\), \(\lambda _{\mathrm{mid}}\), \(\lambda _{\min }\) of the matrix \(L\); (c) the Rezeau et al. dimensionality indices of the structure: \(\mathrm{D1} = \frac{\sqrt{\lambda _{\max }} - \sqrt{ \lambda _{\mathrm{mid}}}}{\sqrt{\lambda _{\max }}} \), \(\mathrm{D2} = \frac{\sqrt{ \lambda _{\mathrm{mid}}} - \sqrt{\lambda _{\min }}}{\sqrt{ \lambda _{\max }}} \) and \(\mathrm{D3} = \frac{\sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}} \); (d) maximum derivative direction \(\vec{n}_{\max }\); (e) intermediate derivative direction \(\vec{n}_{\mathrm{mid}}\); (f) minimum derivative direction \(\vec{n}_{\min }\); (g) the calculation quality indicator calculated by two methods, \(\vert \nabla \cdot \vec{B} \vert / \vert \nabla \times \vec{B} \vert \) (Dunlop and Woodward 1998, blue line) and \(\vert \nabla \cdot \vec{B} \vert / \max ( \vert \partial B_{i}/\partial j \vert )\) \(( i, j = x/y/z )\) (Olshevsky et al. 2015, red line). (h) The spacecraft (SC) trajectory and the magnetic field lines of the current sheet in the \(x\)–\(z\) plane. The blue/green/red line is the \(x/y/z\) component of the vector in panels a, d, e and f. Random errors on the order of \(10^{-7}\) nT have been added to the background field in order to avoid singularities. Unlike the MVA method, which generally obtains results from a series of data samples during an interval measured by a single satellite, with the MDD analysis we can obtain the direction at every observed moment using multipoint measurements. Therefore, MDD can in principle resolve the time variation of the directions. For example, in Fig. 4 of Shi et al. (2005), the maximum directions at some points in the boundary layer (shaded area) show some rotation relative to the mean normal direction, implying that the layer may not be spatially uniform or may have some temporal variations. In data analysis, Shi et al. (2005, 2009a, 2009b), Sun et al. (2010) and Yao et al. (2016) have calculated the normal direction using Cluster data, and Yao et al.
(2017, 2018) and Rezeau et al. (2018) have applied this approach to MMS data. Invariant Axis and D-Based Coordinate for a 2-D Structure If the observed flux tube is a quasi-2-D structure, we can determine its invariant axis direction using the MDD analysis method, and then obtain a D-based coordinate system from the invariant axis. Shi et al. (2005) have applied the analysis to a flux rope modeled by Elphic and Russell (1983) and a flux rope from Cluster observations. Denton et al. (2016) and Hasegawa et al. (2017) have applied the analysis to a magnetic reconnection site using MMS data. Here we use a magnetic field model for a 2-D flux rope, \(\nabla ^{2}A = e^{ - 2A}\), as Hau and Sonnerup (1999) and Hu and Sonnerup (2002) used in their benchmark of GS reconstruction, where \(A\) is the out-of-plane component of the magnetic vector potential. This model has an analytical solution for \(A\), given by: $$ A(\tilde{x},\tilde{y}) = \ln \bigl\{ \alpha \cos \tilde{x} + \sqrt{1 + \alpha ^{2}} \cosh \tilde{y} \bigr\} $$ where (\(\tilde{x}\), \(\tilde{y}\)) are the axes in the plane perpendicular to the invariant axis \(z\). When \(\alpha > 0\), we obtain 2-D flux ropes embedded in a current sheet, and when \(\alpha = 0\), it is a 1-D current sheet. We then perform a simulation in which a cluster of four spacecraft moves across this series of flux tubes, see Fig. 7. The separation is set to 10 km, much smaller than the current sheet width, 400 km. We find that MDD can determine that it is a quasi-2-D structure because \(\lambda _{\max },\lambda _{\mathrm{mid}} \gg \lambda _{\min } \) (Fig. 7b), and the average invariant axis of this interval is \((0.001, -0.020, 0.999)\), very close to \(z\), which is the axis of each flux rope in the model. These structures are 2-D but close to 1-D (Fig. 7b), because they are flux ropes embedded in a current sheet. Therefore, we can still find approximately the current sheet normal direction, which is along \({\sim} N_{\mathrm{max}}\).
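The virtual-spacecraft MDD test described above can be reproduced schematically in a few lines. This is a sketch under our own assumptions (a single time step, a regular 10 km tetrahedron, the Harris model of Eq. (2.7), and hypothetical helper names), not the original analysis code.

```python
import numpy as np

def harris_B(pos, L0=100.0, Bx0=40.0, Bz0=10.0):
    """1-D Harris-like current sheet of Eq. (2.7), field in nT, positions in km."""
    return np.array([Bx0 * np.tanh(pos[2] / L0), 0.0, Bz0])

def mdd_step(r, B, noise=1e-7, seed=0):
    """One MDD time step from four-point data: eigenvalues (descending)
    and eigenvectors (columns) of L = G G^T."""
    rng = np.random.default_rng(seed)
    B = B + rng.normal(scale=noise, size=B.shape)    # avoid a singular matrix
    G = np.linalg.solve(r[:3] - r[3], B[:3] - B[3])  # G[i, j] = dB_j/dx_i
    w, v = np.linalg.eigh(G @ G.T)                   # eigh returns ascending order
    return w[::-1], v[:, ::-1]                       # reorder to descending

# four virtual spacecraft forming a 10 km tetrahedron inside the sheet
tetra = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
r = np.array([0.0, 0.0, 30.0]) + tetra
B = np.array([harris_B(p) for p in r])
lam, n = mdd_step(r, B)
D1 = (np.sqrt(lam[0]) - np.sqrt(lam[1])) / np.sqrt(lam[0])
# quasi-1-D result: D1 is close to 1 and the maximum-derivative
# eigenvector n[:, 0] is along z, the model normal
```

The same step applied at every sample time yields the time series of eigenvalues and directions plotted in Figs. 6 and 7.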
Examples of flux ropes observed by MMS will be shown in Sect. 3.4.3. Simulated MDD analysis on modeled flux ropes: (a) magnetic field observed along the trajectory; (b) square roots of the eigenvalues \(\lambda_{\max}\), \(\lambda _{\mathrm{mid}}\), and \(\lambda _{\min}\) of the matrix \(L\); (c) the Rezeau et al. dimensionality indices of the structure \(\mathrm{D1} = \frac{\sqrt{\lambda _{\max }} - \sqrt{\lambda _{\mathrm{mid}}}}{\sqrt{\lambda _{\max }}} \), \(\mathrm{D2} =\frac{\sqrt{\lambda _{\mathrm{mid}}} - \sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}} \), \(\mathrm{D3} = \frac{\sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}} \); (d) maximum derivative direction \(\vec{n}_{\max }\); (e) intermediate derivative direction \(\vec{n}_{\mathrm{mid}}\); (f) minimum derivative direction \(\vec{n}_{\min }\); (g) the calculation quality indicators calculated in two ways, \(\vert \nabla \cdot \vec{B} \vert / \vert \nabla \times \vec{B} \vert \) (blue line) and \(\vert \nabla \cdot \vec{B} \vert / \max ( \vert \partial B_{i}/\partial j \vert )\) \((i,j=x/y/z)\) (red line). (h) The SC trajectory and the \(B_{z}\) value of the flux rope in the \(x\)–\(y\) plane. The blue/green/red line is the \(x/y/z\) component of the vector in panels a and d–f. Some fluctuations in \(\lambda _{\min}\) are as expected and indicate that the variation along \(\mathit{Nmin}\) is so small that numerical errors are dominant. This is consistent with the configuration of the field and confirms the reliability of the calculation. Recent studies using MMS data show that a combination of MDD and MVA provides more reasonable estimates of the \(L\)–\(M\)–\(N\) coordinate systems of approximately 2-D current sheets during ongoing reconnection than MDD or MVA alone. Here the \(L\) axis is along the direction of the reconnecting magnetic field component, the \(N\) axis is perpendicular to the current sheet, and the \(M\) axis is along the reconnection line (the \(X\) line) in the 2-D model. Denton et al.
(2018) developed a hybrid method in which the normal (\(N\)) is estimated from the maximum directional derivative of the magnetic field and the \(L\) axis is along the maximum variance direction of the magnetic field. Small adjustments are necessary to make the \(L\)–\(M\)–\(N\) axes strictly orthogonal to each other (see the Appendix of Denton et al. 2018 for details). Genestreti et al. (2018) showed that the best \(L\)–\(M\)–\(N\) coordinate system for a magnetotail reconnection event can be estimated by a combined MDD and MVAVe (minimum variance analysis of the electron velocity) method. In their study, the \(M\) axis is defined to be along the cross product of the \(N\) axis from MDD and the maximum variance direction of the electron velocity (which turns out to be roughly along the \(L\) axis), and the \(L\) axis completes the right-handed orthogonal system. For a 3-D structure, if it is not perfectly isotropic, it still has maximum, intermediate and minimum derivative directions. Then we can still obtain its three principal axes from MDD analysis, and a D-based coordinate system can still be built. D-Based Coordinate System for a 2-D Structure Based on MVA of the Magnetic Pressure Gradient As mentioned above, it is found that MVA on the electric current density, i.e., MVAJ, is sometimes valid for finding a flux rope invariant axis. The D-based coordinate system can then be built when studying a flux rope using MVAJ if we can obtain accurate current observations/estimations inside the flux rope. Recently, in studying some magnetopause events from MMS data, Zhao et al. (2016) found that because neither the current nor the magnetic field component along the rope axis is constant, minimum variance analysis on either of them cannot yield an accurate rope axial direction. Therefore, they suggested performing the minimum variance analysis on the magnetic pressure gradient.
The magnetic pressure gradient can be calculated from four-spacecraft data using the approach introduced in Sect. 2.4.1. Based on the assumption that the flux rope pressure profile is uniform along the axial direction on the MMS spacecraft separation scale (around 10 km), the pressure gradient acts only perpendicular to the rope axis. Thus minimum variance analysis on the magnetic pressure gradient gives a good estimation of the axial direction of flux ropes using MMS data. For one of the same events, we have performed the MDD analysis (see Fig. 8). The axis direction from MDD is \([-0.336, 0.836, -0.434]\) in GSM coordinates, averaged from 2015-10-16 13:04:29.2 to 2015-10-16 13:04:29.8, and has an angular difference of 2.75 degrees from the calculation of Zhao et al. based on the magnetic pressure gradient, \([-0.319, 0.861, -0.396]\) (their Fig. 3a). Recently, Zhao et al. (2018, private communication) proposed a PQR system, where \(R\) is the rope axial direction determined by the minimum variance of the magnetic pressure gradient, \(Q\) is along the average direction of the flux rope motion in the spacecraft frame, and \(P\) completes the right-hand coordinate system. This coordinate system is particularly convenient for a 2-D flux rope study since the bipolar field signature will be revealed in the \(P\) component and the unipolar core field will be revealed in the \(R\) component. Also, one can calculate the different forces in the momentum equation to study the physics in a flux rope. MDD analysis on a flux transfer event: (a) magnetic field in GSM coordinates observed by MMS1 along the trajectory; (b) square roots of the eigenvalues \(\lambda _{\max}\), \(\lambda _{\mathrm{mid}}\), and \(\lambda_{\min}\) of the matrix \(L\) (the dashed horizontal line indicates \(\delta B/l_{\max } \), given a measurement error \(\delta B = 0.05\) nT and the largest separation among spacecraft \(l_{\max } \), discussed in Sect.
4.1); (c) minimum derivative direction \(\vec{n}_{\min }\); (d) structure velocity perpendicular to the invariant axis, i.e., in the variant plane, see discussion in Sect. 3.4.3; (e) the calculation quality indicators calculated in two ways, \(\vert \nabla \cdot \vec{B} \vert / \vert \nabla \times \vec{B} \vert \) (blue line) and \(\vert \nabla \cdot \vec{B} \vert / \max ( \vert \partial B_{i}/\partial j \vert )\) \((i,j=x/y/z)\) (red line). The results for the two periods within the two blue boxes have smaller uncertainties and more stable directions. Other single- or multi-point methods for building a D-based coordinate system for a flux rope have also been developed. Assuming axial symmetry, Rong et al. (2013) developed a method to obtain the invariant axis of a flux rope. Zhang et al. (2013) and Yang et al. (2014) studied some force-free flux ropes using the curvature determination methods of Shen et al. (2007). Minimum Gradient Analysis (Local-MVA-Like Method) Using multi-point calculations, we can also obtain a coordinate system similar to MVA at every moment. As partially mentioned in Shi et al. (2005), if we can calculate the spatial differences of the field using multipoint measurements, another way to build a coordinate system is to calculate the extremum values of the gradient of \(B_{n}\). Considering the product of \(\nabla \vec{B}\) and \(\vec{n}\), we find that \(\vec{D}'\) is the gradient of \(B_{n}\), i.e., \(\vec{D}' = \nabla \vec{B} \cdot \vec{n} = \nabla B_{n} = ( \partial B_{n} / \partial x,\partial B_{n} / \partial y,\partial B_{n} / \partial z )\). We can also calculate the extremum of \(D^{\prime \,2}\) to see what happens. Following similar algebraic manipulations as used in Sect. 2.4.1, we find that the minimization of \(D^{\prime \,2}\) is equivalent to solving for the eigenvalues and eigenvectors of the matrix \(L' = G^{T}G = ( \nabla \vec{B} )^{T} ( \nabla \vec{B} )\). This matrix is also symmetrical and has real eigenvalues and orthogonal eigenvectors.
If one eigenvalue of \(L\) is \(\lambda\) with eigenvector \(\vec{n}\), i.e., \(GG^{T} \vec{n} = \lambda \vec{n}\), then multiplying both sides by \(G^{T}\) gives \(G^{T}G ( G^{T} \vec{n} ) = \lambda ( G^{T} \vec{n} )\). So \(\lambda\) is also an eigenvalue of \(L'\), and the corresponding eigenvector is \(G^{T} \vec{n}\). The matrix \(L' = G^{T}G\) here and the matrix \(L = GG^{T}\) in Sect. 2.4 therefore have the same eigenvalues but different eigenvectors. The minimization of \(D^{\prime \,2}\) amounts to the minimization of the gradient of \(B_{n}\), so we may call this approach 'Minimum Gradient Analysis' (MGA). The objective of this method is similar to that of the MVAB method, because MVAB looks for the minimum variance of \(B_{n}\). If the variance of \(B_{n}\) is minimal, then the gradient of \(B_{n}\) should also be minimal, provided the magnetic field structure does not vary with time (the stationarity of the magnetic field always holds for a 1-D structure, as mentioned in Sect. 2.1, and is often valid for 2-D/3-D structures if the motion across the spacecraft is very fast). We can call this a local-MVAB-like analysis for multi-point data ('local' meaning that MVAB is performed at every moment, i.e., over a small region compared to the traditional MVAB for the whole crossing). Figure 9 shows a simulated result of this kind of calculation for the modeled current sheet given by (2.7). From the point of view of MVA, the maximum direction of \(B_{n}\) should be along \(x\), and one cannot distinguish the medium and minimum directions; this is just consistent with the result shown in Fig. 9. For this case we may give a physical explanation of why the eigenvalues are the same while the eigenvectors are different. The three eigenvalues in Fig. 9b are exactly the same as those in Fig. 6b when we use the same set of random errors added to the magnetic field.
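The eigenvalue correspondence between \(L = GG^{T}\) and \(L' = G^{T}G\) is easy to check numerically. The following short sketch (our own illustration, not code from any of the cited studies; the variable names are assumed) verifies that both matrices share the same spectrum and that \(G^{T}\vec{n}\) is an eigenvector of \(L'\) whenever \(\vec{n}\) is an eigenvector of \(L\):

```python
import numpy as np

# Verify the relation derived above: L = G G^T (MDD) and L' = G^T G (MGA)
# share eigenvalues, and if n is an eigenvector of L then G^T n is an
# eigenvector of L' with the same eigenvalue.
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 3))       # stands in for the measured grad(B) tensor

L = G @ G.T                            # MDD matrix
Lp = G.T @ G                           # MGA matrix

w_L, V_L = np.linalg.eigh(L)           # both matrices are symmetric -> eigh
w_Lp, _ = np.linalg.eigh(Lp)

# 1) identical eigenvalue spectra (eigh returns them in ascending order)
assert np.allclose(w_L, w_Lp)

# 2) G^T n is an eigenvector of L' with the same eigenvalue
n_max = V_L[:, -1]                     # eigenvector of L for the largest eigenvalue
m = G.T @ n_max
assert np.allclose(Lp @ m, w_L[-1] * m)
```

The same check works for any of the three eigenpairs, since the mapping \(\vec{n} \mapsto G^{T}\vec{n}\) carries each eigenvector of \(L\) to one of \(L'\).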
The maximum direction of MDD is along \(\vec{z}\) for this current sheet, and from the discussion above the maximum direction of MGA should then be along

$$ G^{T} \vec{z} = \begin{bmatrix} \partial B_{x}/\partial x & \partial B_{x}/\partial y & \partial B_{x}/\partial z \\ \partial B_{y}/\partial x & \partial B_{y}/\partial y & \partial B_{y}/\partial z \\ \partial B_{z}/\partial x & \partial B_{z}/\partial y & \partial B_{z}/\partial z \end{bmatrix} \vec{z} = \begin{bmatrix} 0 & 0 & \partial B_{x}/\partial z \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \partial B_{x}/\partial z \\ 0 \\ 0 \end{bmatrix} \propto \vec{x}, $$

which is consistent with the calculation in Fig. 9d. For cases with \(B_{y}\) or \(B_{z}\) varying along \(\vec{z}\), as shown in Sect. 4.3, the result may be different.

Simulated local-MVA-like (MGA) analysis of the modeled current sheet (the same model, parameters, and added random errors as in Fig. 6): (a) magnetic field observed along the trajectory; (b) square root of eigenvalues \(\lambda_{\max}\), \(\lambda_{\mathrm{mid}}\), and \(\lambda_{\min}\) of the matrix \(L'\); (c) maximum derivative direction \(\vec{n}_{\max}\); (d) intermediate derivative direction \(\vec{n}_{\mathrm{mid}}\); (e) minimum derivative direction \(\vec{n}_{\min}\); (f) the calculation quality indicators calculated in two ways, \(\vert \nabla \cdot \vec{B} \vert / \vert \nabla \times \vec{B} \vert\) (blue) and \(\vert \nabla \cdot \vec{B} \vert / \max ( \vert \partial B_{i}/\partial j \vert )\) (\(i,j=x/y/z\)) (red); (g) the SC trajectory and the magnetic field line of the current sheet in the \(x\)–\(y\) plane. The blue/green/red curve is the \(x/y/z\) component of the vector in panels a, d, e and f. From panel d one can note that this method can successfully find the maximum direction, as does the single-satellite MVAB method (Table 1), although it cannot well distinguish the \(\mathit{Nmin}\) and \(\mathit{Nmid}\) directions, which is also true for local MVA

Using this method alone, we cannot directly build a complete D-based coordinate system. However, this local-MVAB-like method can help find the \(L\) direction, which is difficult for the MDD analysis. Then in some 1-D cases, such as that described by (2.7), the combination of the MDD and MGA methods may provide the different axes of the D-based coordinate system.
Note that if we use the four-satellite data at every moment in time to perform the MVA, one can expect the same results as with the MGA.

D-Based Coordinate System for a 2-D Structure Based on Current Density Measurements

The MMS mission has for the first time enabled sufficiently accurate measurements of the electric current density with the plasma instruments (e.g., Eastwood et al. 2016; Phan et al. 2016), and that capability allowed for the development of a new method for the invariant axis orientation of steady, 2-D structures (Hasegawa et al. 2019). The method can be used to estimate the orientation of the X line and flux rope axis from single-spacecraft measurements of the magnetic field and current density. Here we assume that the structure is time independent and 2-D (\(\partial/\partial t = 0\), \(\partial/\partial z = 0\)) and that the co-moving frame velocity \(\vec{V}_{\mathrm{str}}\) is known from one of the methods to be discussed in Sect. 3. The \(x\) axis is defined to be antiparallel to the projection of \(\vec{V}_{\mathrm{str}}\) onto the plane perpendicular to the \(z\) axis, and the \(y\) axis completes the orthogonal system. The \(y\) component of Ampère's law \(\nabla \times \vec{B} = \mu_{0} ( \vec{j} + \varepsilon_{0}\, \partial \vec{E}/\partial t )\) can then be reduced to \(- \partial B_{z}/\partial x = \mu_{0} j_{y}\). This indicates that we can obtain the \(B_{z}\) values at points along the spacecraft path from integration along \(x\) of the component \(j_{y}\) of the current density, which can be measured as \(\vec{j} = ne ( \vec{v}_{i} - \vec{v}_{e} )\) by the state-of-the-art plasma instruments, in addition to the direct measurements by the magnetometers.
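The integration idea can be illustrated with a synthetic 1-D current layer (not MMS data; the tanh profile, parameters, and names below are our assumptions): integrating a "plasma-measured" \(j_{y}\) along the path should recover the magnetometer \(B_{z}\), and a sum-of-squares residue between the two quantifies the agreement.

```python
import numpy as np

# Synthetic check of B_z recovery from j_y: take B_z(x) = B0*tanh(x/lam) as
# the axial field of a 1-D current layer, so that -dB_z/dx = mu0*j_y gives
# j_y = -(1/mu0) dB_z/dx.  Integrating j_y along the path (here parameterized
# directly by x in the structure frame) reproduces the magnetometer B_z.
mu0 = 4e-7 * np.pi
B0, lam = 10e-9, 1e4                    # 10 nT, 10 km half-thickness (SI units)
x = np.linspace(-5e4, 5e4, 400)         # sampling points along the path (m)
dx = x[1] - x[0]

Bz_mag = B0 * np.tanh(x / lam)                         # "magnetometer" B_z
jy = -(B0 / lam) / np.cosh(x / lam)**2 / mu0           # "plasma" current density

# B_z,pla = B_z(start) - mu0 * cumulative integral of j_y dx (trapezoid rule)
integral = np.concatenate(([0.0], np.cumsum((jy[1:] + jy[:-1]) / 2 * dx)))
Bz_pla = Bz_mag[0] - mu0 * integral

# Residue and correlation between the two estimates
RES = np.sum((Bz_mag - Bz_pla)**2)
cc = np.corrcoef(Bz_mag, Bz_pla)[0, 1]
assert cc > 0.999 and RES < (0.01 * B0)**2 * x.size
```

For a trial invariant axis that is wrong, \(j_{y}\) would mix in other current components and the residue would grow, which is what makes the residue a usable figure of merit for the axis orientation.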
For an accurate orientation of the invariant axis \(\hat{\mathbf{z}}\), \(B_{z}\) from the spatial integration of \(j_{y}\) $$ B_{z,\mathrm{pla}} = B_{z,\mathrm{mag}} ( t=0 ) - \mu _{0} \int j_{y}\, dx, $$ where \(dx = - \vec{V}_{\mathrm{str}} \cdot \hat{x}\, dt\) and \(t=0\) represents the start of the time interval under discussion, should agree with \(B_{z,\mathrm{mag}}\), the \(B_{z}\) directly measured by the magnetometers during the corresponding interval. The optimal invariant axis can thus be estimated by minimizing the residue \(\mathrm{RES} = \sum_{m=1}^{m=M} ( B_{z, \mathrm{mag}}^{(m)} - B_{z,\mathrm{pla}}^{(m)} )^{2}\), where \(M\) is the total number of data points used in the reconstruction. For structures that satisfy the 2-D and steady assumptions, the correlation between \(B_{z,\mathrm{pla}}\) and \(B_{z,\mathrm{mag}}\) along the optimal invariant axis should be sufficiently high. For the MMS magnetotail reconnection event reported by Torbert et al. (2018) and Genestreti et al. (2018), the correlation coefficient is 0.9525 and the derived invariant axis is only 5 degrees away from the \(M\) axis estimated by the combined MDD-MVAVe method, which suggests that the observed reconnection was roughly 2-D and steady. By use of the coordinate system thus obtained, reasonable magnetic field and electron streamline patterns in and around the electron diffusion region have been reconstructed from the 2-D electron MHD reconstruction (Hasegawa et al. 2019).

Frame of Reference

Here the frame of reference in which the observer resides is called the observational frame. If we can find a reference frame in which the observed magnetic field does not vary with time, then this is a steady magnetic or an electrostatic structure, and this reference frame (which moves with the magnetic field structure) is often called a 'proper frame' (e.g., Khrabrov and Sonnerup 1998b; Sonnerup et al. 2013).
Obviously, in this frame \(( \partial \vec{B}/\partial t )_{\mathrm{str}} = - \nabla \times \vec{E}_{\mathrm{str}} = 0\), where the subscript 'str' indicates the field quantities in the reference frame of the magnetic field structure. To find a frame in which the curl of the electric field vanishes, one easy way is to let the electric field itself vanish in that frame. This yields the deHoffmann–Teller (HT) frame, which can be determined by a single-satellite method, as discussed in Sect. 3.1. Other single-satellite methods will be discussed in Sect. 3.2, followed by some multi-spacecraft methods discussed in later subsections.

Frame in Which the Electric Field Disappears: deHoffmann–Teller Frame

De Hoffmann and Teller (1950) first introduced the HT reference frame in the study of an MHD shock; the electric field \(\vec{E}_{\mathrm{str}}\) disappears in this reference frame. Obviously, if \(\vec{E}_{\mathrm{str}} = 0\), then \(( \partial \vec{B}/\partial t )_{\mathrm{str}} = - \nabla \times \vec{E}_{\mathrm{str}} = 0\) must be satisfied. If the HT reference frame exists, then the magnetic field variation with time observed by a satellite is caused only by the motion of a quasi-static magnetic field structure relative to the satellite. The goal of the deHoffmann–Teller (HT) analysis is to find the velocity \(\vec{V}_{\mathrm{HT}}\) of the HT reference frame from a set of discretely sampled data points. In practice this generally involves a least squares search for the minimum of the residual electric field in the new reference frame. Details can be found in the review by Khrabrov and Sonnerup (1998b). A prominent advantage of the HT analysis is that one can find indicators that may be used to estimate the reliability of the analysis results. In short, one can compare the electric field measured by the satellite with the electric field caused by the motion of the HT frame to determine whether the resulting HT frame is reasonable. If they are very close, the electric field in the HT reference frame should be very close to zero.
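A minimal least-squares HT determination can be sketched as follows. The closed-form solution via the matrix \(K_{m} = \vert \vec{B}_{m} \vert^{2} I - \vec{B}_{m}\vec{B}_{m}^{T}\) follows Khrabrov and Sonnerup (1998b), but the code, names, and synthetic test field are our own illustration:

```python
import numpy as np

def ht_velocity(v, B):
    """Least-squares deHoffmann-Teller velocity from plasma bulk velocity v
    and magnetic field B time series (each of shape (N, 3)); minimizes the
    residual convection electric field |(v - V_HT) x B|^2 using the
    closed-form solution of Khrabrov and Sonnerup (1998b)."""
    # K_m = |B_m|^2 I - B_m B_m^T for each sample; V_HT = <K>^-1 <K v>
    K = (np.einsum('mi,mi->m', B, B)[:, None, None] * np.eye(3)
         - np.einsum('mi,mj->mij', B, B))
    K0 = K.mean(axis=0)
    Kv = np.einsum('mij,mj->mi', K, v).mean(axis=0)
    return np.linalg.solve(K0, Kv)

# Synthetic check (illustrative values, not observed data): a rotating field
# convected at V_true, with the flow field-aligned in the co-moving frame, so
# an exact HT frame exists and the residual electric field vanishes there.
N = 200
B = np.stack([np.tanh(np.linspace(-3, 3, N)),
              np.full(N, 0.5),
              np.linspace(1, -1, N)], axis=1)
V_true = np.array([400.0, -30.0, 20.0])
v = V_true + 50.0 * B / np.linalg.norm(B, axis=1, keepdims=True)

V_HT = ht_velocity(v, B)
E_res = np.cross(v - V_HT, B)            # residual field in the HT frame
assert np.allclose(V_HT, V_true, atol=1e-6)
assert np.max(np.abs(E_res)) < 1e-6
```

With real data the residual does not vanish exactly, and the scatter of \(-\vec{v}\times\vec{B}\) against \(-\vec{V}_{\mathrm{HT}}\times\vec{B}\) provides the quality indicators discussed next.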
One specific approach is to draw a scatter plot of the electric field components in the satellite frame versus the corresponding components of the field in the HT frame. If the slope of the line of best fit and the correlation coefficient are both close to 1, the reliability of the HT frame should be good. Another way is to calculate the ratio of the mean square of the residual electric field in the HT frame to the mean square of the original electric field in the satellite frame. A measure of the reliability of the HT frame is then given by the reciprocal of this ratio. However, we should be careful when using these indicators. If the derived HT velocity is very high, as expected in the solar wind, the correlation coefficient is naturally high (i.e., the ratio is small), so in such cases these are not good proxies of a good HT frame. The correlation coefficient or ratio should then be calculated not in the spacecraft frame, but in the frame in which the average plasma flow velocity is zero, as has been done in Hasegawa et al. (AG, 2015). However, the requirement \(\vec{E}_{\mathrm{str}} = 0\) is too strict (i.e., it is sufficient for a proper frame, but not necessary). In many cases, such as a perpendicular shock (with a cross-shock electric potential inside the ramp), some magnetic flux ropes (see the discussion by Sonnerup and Hasegawa 2005), and other structures possessing a curl-free electric field in the frame moving with the structure, a proper frame can still exist but cannot be obtained through the HT analysis. In some structures such as shocks and other discontinuities, there are often intrinsic electric fields within the layer along the normal, which can affect the quality of the determination of the frame velocity from the direct HT analysis, and may be important for the understanding of physical processes within the layer.
When performing the HT analysis on these structures, we should manually exclude the data points within the layer to obtain a correct proper frame; see the reviews in Khrabrov and Sonnerup (1998a, 1998b, ISSI book) and Paschmann and Sonnerup (2008, ISSI book). In these cases, the use of the SH method (Sonnerup and Hasegawa 2005) or STD (Sect. 3.4) may be helpful in finding the proper frame. A revised HT analysis can also provide an estimate of the acceleration (Khrabrov and Sonnerup 1998a, 1998b, ISSI book), but this is an average (constant) acceleration over the time-domain of the sampled data. For instantaneous velocity calculations at every time sample (i.e., allowing variable acceleration) one can refer to Sect. 3.3.

Proper Frame Obtained from Single Point Data: Minimum Faraday Residue (MFR) and Sonnerup–Hasegawa (SH) Methods by Assuming a Priori the Dimensionality (Dimension Number) of a Structure

Since multi-point data sources are currently limited to the Cluster and MMS missions, finding a proper frame from a single satellite when the HT analysis fails is still very useful. Several novel attempts have been made previously. Assuming a 1-D structure, minimum Faraday residue analysis (MFR) (Terasawa et al. 1996; Khrabrov and Sonnerup 1998b) and minimum mass flux residue analysis (MMR) (Sonnerup et al. 2004) have been proposed. For a 1-D structure, Faraday's law requires that the components of the electric field tangential to the layer be constant, and then a least squares method can be applied to obtain the normal and the velocity along the normal. Sonnerup et al. (2006, 2007) have suggested unified approaches which can be applied to any measured quantity that follows a classical conservation law. See the detailed review in Sonnerup and Teh (2008, ISSI book). For a time-invariant structure, it is required that \(\partial \vec{B}/\partial t = 0\) in the proper frame we are looking for.
According to Faraday's law, \(\nabla \times \vec{E} = -\partial \vec{B}/\partial t\), so \(\nabla \times \vec{E} = 0\). Further, if the structure is 2-D, the electric field component \(E_{z}\) along the invariant axis should be constant across the structure. Note that the components perpendicular to the invariant axis are not necessarily zero, so an HT frame may not exist. Sonnerup and Hasegawa (2005) proposed a scheme (hereafter referred to as the SH method) to derive the direction along which the electric field component has minimum variance. With this method, the orientation of the invariant direction and the velocity components of the structure perpendicular to the invariant direction can be obtained. For structures of magnetic flux rope type, the SH method can give satisfactory results, consistent with estimates from other methods, e.g., from a multi-spacecraft method based on G–S reconstruction (Hasegawa et al. 2006). However, other attempts show that the SH method does not work for most observed as well as numerically simulated reconnection events (Teh and Sonnerup 2008; Denton et al. 2010, 2012). Sonnerup et al. (2013) theoretically discussed the reasons for such shortcomings, and made clear that a significant, non-removable, non-uniform electric field in the plane transverse to the invariant direction is required for the method to work properly. It is also found that the results are sensitive to deviations from strict two-dimensionality and time stationarity. If we combine the MDD and MFR/SH methods for multi-point data analysis, we may obtain more reliable results. For example, we can use the MDD method to find a structure close to 1-D, and then use MFR to get the normal direction and velocity.

Triangulation for 1-D/2-D Structures: Four Spacecraft Timing

Here is another method to find the normal of a 1-D structure, and then build a D-based coordinate system and reference frame. Burlaga and Chao (1971) and Russell et al.
(1983) developed and used the Triangulation method, also named the Timing method, to study interplanetary discontinuities. It is used for a planar structure crossed by at least four spacecraft. A planar structure is actually a 1-D structure, in which all field quantities vary only in one direction, i.e., its normal direction. If a structure has a finite thickness rather than lying in a plane, the Timing method is still valid as long as the structure is one-dimensional. So the traversal of a 1-D structure is the basic assumption of the Triangulation method. The original Triangulation method also assumes that the velocity of the 1-D boundary does not vary during the crossing of all spacecraft, and it is therefore also called the 'Constant Velocity Approach' or CVA. If one assumes instead that the velocity can change but the boundary thickness is constant, the approach can be modified to a 'Constant Thickness Approach' or CTA, which is summarized in Haaland et al. (2004) and Sonnerup and Teh (2008). Here we only review the CVA scheme for four-satellite crossings. Suppose a planar or 1-D structure moves across four satellites, where we know the relative positions of the satellites \(\Delta \vec{r}_{ij}\) (\(i\), \(j=1\), 2, 3, \(i \neq j\)) and the traversing time difference between each pair of satellites \(\Delta t_{ij}\) (\(i\), \(j=1\), 2, 3, \(i \neq j\)). We thus obtain \(v \Delta t_{ij} = \Delta \vec{r}_{ij} \cdot \vec{n}\), where \(\vec{n}\) is the normal direction and \(v\) is the velocity magnitude. We then have three linear equations plus the constraint \(\vert \vec{n} \vert^{2} = 1\), and the solution for \(\vec{n}\) and \(v\) is obtained by solving these four equations. In addition to the 1-D assumption, the structure must be quasi-static, such that when the structure is crossed by all satellites, its normal direction does not change during the interval. Recently Plaschke et al.
(2017) performed a time-varying Timing velocity determination, using 3 s long sliding intervals of high time resolution data from the four MMS satellites, computing the cross-correlation functions of each spacecraft pair to obtain \(\Delta t_{ij}\). Knetter (2005), Xiao et al. (2015), and Yao et al. (2016, 2017, 2018) further considered the uncertainties of such calculations. In order to use the Timing method in two-dimensional cases, Zhou et al. (2006a) proposed a Multiple Triangulation Analysis method, hereafter referred to as the MTA method. If the structure is 2-D, we can apply timing analysis to a series of magnetic field contour surfaces and obtain a series of velocities and normal directions, as shown in Fig. 10. The direction perpendicular to the plane containing these normals, identified by the minimum variance of the series of normal vectors, is the invariant direction of the two-dimensional structure. We can use a mathematical method similar to that used in the MVA and MDD analyses to get this direction, i.e., the invariant direction of the 2-D structure. A D-based coordinate system can then also be built through MTA. From a case study, they found that the directions calculated by the MTA method and the MDD method are the same for a quasi-2-D flux tube (see Zhou et al. 2006a). In principle, the MTA approach should also have the capability to determine the dimensionality of a given structure, since the distribution of the normal directions can be characterized by the three eigenvalues of the MTA matrix (\(\lambda_{\max}\), \(\lambda_{\mathrm{mid}}\), and \(\lambda_{\min}\)). In case \(\lambda_{\min}\) is much smaller than the other two (which means that the series of normal directions is nearly coplanar, see Fig. 4a of Zhou et al. 2006b), the structure can be treated as 2-D and the eigenvector with the minimum eigenvalue represents the axis of the 2-D structure.
If \(\lambda_{\max}\) is much larger than the other two, the series of normal directions is aligned with the eigenvector of the largest eigenvalue; this is the normal direction of a 1-D structure. For 3-D structures, the three eigenvalues are not well separated. For very large separations of the four satellites, when MDD is not valid because the field gradient calculation is no longer accurate, the MTA approach still has the ability to give a normal for a 1-D structure. In 2-D cases, however, the MTA approach fails if the spacecraft separation is comparable to or larger than the scale size of the 2-D structure. This method does not require a cylindrical symmetry assumption (although a cylindrical flux rope is used in Fig. 10 as an example), and the magnetic field vectors do not need to lie in a plane.

Schematic view of the MTA method for a 2-D flux rope (from Zhou et al. 2006b), showing the four-satellite constellation (here they assume Cluster) passing through a flux rope. Using the Triangulation method, the normal direction of each contour plane can be obtained, none of which has a \(z\) component. The set of magnetic contour planes is represented by the dashed circles and the normal directions of these planes are shown by solid arrows. Thus, the cross product of each pair of directions for each contour plane should point along the flux rope axis

Since the traditional Timing method is only applicable to the 1-D case and the MTA method can be used in 1-D or 2-D cases, one may first use the MDD analysis (when the satellite separation is small enough that the gradient calculation is valid) to determine the structure dimension number and then perform the traditional Timing or MTA methods. For example, when calculating the velocity of magnetic peaks, Yao et al. (2018) first determined the dimension number of the structure with the MDD analysis, found that its boundary could be deemed 1-D, and then used the traditional Timing method to calculate the normal direction and propagation velocity.
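The CVA timing solve and the MTA eigen-analysis described above can be sketched together as follows (entirely synthetic geometry; the flux rope axis, baselines, and contour speeds are assumed for illustration). Timing applied to successive contour surfaces yields a set of normals, and the eigenvector of their variance matrix with the smallest eigenvalue gives the invariant axis:

```python
import numpy as np

def timing_cva(dr, dt):
    """CVA timing: with m = n/v, the relations v*dt_ij = dr_ij . n become the
    linear system dr_ij . m = dt_ij for three spacecraft pairs.
    Returns (unit normal, speed)."""
    m = np.linalg.solve(dr, dt)
    v = 1.0 / np.linalg.norm(m)
    return m * v, v

# Baselines between spacecraft pairs (km); assumed tetrahedron-like geometry
dr = np.array([[10.0,  0.0,  0.0],
               [ 0.0, 12.0,  3.0],
               [ 4.0, -5.0, 11.0]])

# A 2-D flux rope with invariant axis along z: its contour normals fan out in
# the x-y plane.  Generate ideal crossing delays for each contour surface and
# recover each normal by timing.
axis_true = np.array([0.0, 0.0, 1.0])
normals = []
for phi in np.linspace(0.0, np.pi, 15):
    n_true = np.array([np.cos(phi), np.sin(phi), 0.0])
    dt = dr @ n_true / 50.0               # 50 km/s contour speed (assumed)
    n_hat, v = timing_cva(dr, dt)
    normals.append(n_hat)
normals = np.array(normals)

# MTA step: eigen-analysis of the variance matrix of the timing normals
M = np.einsum('mi,mj->ij', normals, normals) / len(normals)
w, V = np.linalg.eigh(M)                  # eigenvalues in ascending order
axis_est = V[:, 0]                        # lambda_min eigenvector = 2-D axis

assert w[0] < 1e-10 * w[-1]               # lambda_min << others -> 2-D
assert abs(axis_est @ axis_true) > 0.999
```

With noisy delays, \(\lambda_{\min}\) no longer vanishes exactly, and the separation of the three eigenvalues serves as the dimensionality diagnostic discussed in the text.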
The velocity can also be obtained by the method introduced in Sect. 3.4, and the results from different methods can confirm each other. The timing velocities are calculated for each magnetic field contour surface. Therefore, from this series of timing velocities and directions, one can obtain the velocity perpendicular to the axis using mathematical procedures similar to those used in MVA and MDD. From some examples, Zhou et al. (2006b) found that there is little difference between the results of the MTA method and the STD method introduced in Sect. 3.4 for a 2-D structure.

Proper Frame for a (Quasi-) Stationary Structure: Spatio-Temporal Difference (STD) Frame from Multi-point Data

As mentioned in Sect. 3.1, when performing the HT analysis we require the electric field to vanish in the frame we want to find. This is too strict: in some cases the electric field does not vanish, but the curl of the electric field disappears in a proper frame where \(( \partial \vec{B}/\partial t )_{\mathrm{str}} = 0\). In addition, the traditional Timing method assumes a 1-D structure, i.e., the dimension number of the structure is assumed before performing the analysis. To avoid these problems, Shi et al. (2006) developed a method of velocity calculation (or frame determination) for any structure dimension number, known as the 'Spatiotemporal Difference' (STD) analysis of the magnetic field. A GUI interface for the STD method can now be accessed in SPEDAS.

Introduction to the Analysis

If the structure to be analyzed does not change significantly during the interval over which the satellite system moves across it (in other words, the time scale of the structure's motion is small compared with the structure's variation time), it is a quasi-stationary structure. So, in the reference frame of the structure we have \(\frac{\partial \phi }{\partial t} \big\vert _{\mathrm{str}}\sim 0\).
Then from (1.1) we get

$$ \frac{\partial \phi}{\partial t} \bigg\vert_{\mathrm{sc}} = - \vec{V}_{\mathrm{str}} \cdot \nabla \phi , $$

where the observation frame 'obs' is the spacecraft frame, here referred to as 'sc'. Equation (3.2) means that the temporal change of the field measured by the spacecraft is caused only by the spatial non-uniformity of the structure. In space observations, we have measurements of various parameters, such as moments (including density, temperature, and velocity) and vector fields (electric field and magnetic field, each of which contains three components). In principle we just need to pick any three of these quantities (each component of a vector field can be used as one field quantity) to replace \(\phi\) in (3.2) and obtain three linear equations. The calculation of \(\nabla \phi\) needs data from at least four spacecraft (see Sect. 2.4.1 for details). By utilizing the finite difference approximation, \(\frac{\partial \phi }{\partial t} \big \vert _{\mathrm{sc}}\) at every moment can be calculated using data from one spacecraft or mean values from multi-spacecraft data. Then we can obtain the three components of the vector \(\vec{V}_{\mathrm{str}}\) by solving the three linear equations. The three magnetic field components are recommended in this calculation due to their higher time resolution, smaller measurement error, and easier accessibility. Thus we will take magnetic field data as an example to introduce the method in detail. Using the magnetic field, (3.2) can be written as

$$ \frac{\partial \vec{B}}{\partial t} \bigg\vert _{\mathrm{sc}} + \vec{V} _{\mathrm{str}} \cdot \nabla \vec{B} = 0 $$

The first term on the left is the temporal variation caused by the motion through spatial gradients of the magnetic field, and \(\vec{V}_{\mathrm{str}}\) (to be determined) is the velocity of the structure relative to the observer, that is, the spacecraft. Equation (3.3) means that the observed temporal change of the magnetic field at the spacecraft is caused only by the motion of the structure.
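For a fully 3-D structure the STD solve of (3.3) is a single \(3 \times 3\) linear system per time step. A minimal sketch (synthetic, well-conditioned gradient tensor; the names are our own) with the convention \(G_{ij} = \partial B_{j}/\partial x_{i}\):

```python
import numpy as np

# STD core solve for a 3-D structure: eq. (3.3) reads dB/dt|_sc = -V_str . G,
# i.e. the linear system  G^T V_str = -dB/dt|_sc,  with G[i, j] = dB_j/dx_i
# (in real data G comes from four-point differencing, Sect. 2.4.1).
G = np.array([[ 2.0, 0.5, -1.0],       # synthetic, well-conditioned grad(B)
              [ 0.0, 1.5,  0.3],
              [-0.4, 0.2,  1.0]])
V_true = np.array([140.0, -160.0, -120.0])
dBdt = -V_true @ G                      # what the spacecraft would observe

V_str = -np.linalg.solve(G.T, dBdt)     # recovered structure velocity
assert np.allclose(V_str, V_true)
```

For 1-D or 2-D structures \(G\) is (near-)singular and this direct solve degrades, which is exactly the problem addressed next.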
The main idea of this method is to solve the difference equations of (3.3) at every observation point: \(\partial \vec{B}/\partial t \big\vert _{\mathrm{sc}}\) can be estimated by calculating the magnetic field time difference observed by the spacecraft at the resolution of the observation time series; the matrix \(\nabla \vec{B}\) can be estimated by the multi-point methods mentioned in Sect. 2.4.1. It is worth noting that when calculating \(\partial \vec{B}/\partial t\vert _{\mathrm{sc}}\) we can obtain a result with second-order accuracy by using the central finite difference \((\vec{B}_{i+1}-\vec{B}_{i-1})/ (t_{i+1}-t_{i-1})= (\vec{B}_{i+1}-\vec{B}_{i-1})/2 \Delta t \). Using the mean value measured by the four satellites amounts to a linear interpolation of the measured magnetic field to the barycenter of the tetrahedron, as demonstrated in Harvey (1998). The time step \(\Delta t\) should be set according to the characteristic length of the observed structure, neither too short nor too long. If the step \(\Delta t\) is too short, the short-time disturbances often present in the magnetic field measurements will corrupt the calculation of \(\partial \vec{B}/\partial t \vert _{\mathrm{sc}}\) and hence the structure velocity. If \(\Delta t\) is too long, the accuracy of the difference will be poor. Empirically, we suggest that \(\Delta t\) be taken as \({\sim} 1/10\) of the characteristic time scale of the structure. For example, when a current sheet crossing takes ∼1 min, \(\Delta t\) can be taken as ∼6 s. For a 3-D structure, the calculation is straightforward. The three components of \(\vec{V}_{\mathrm{str}}\) can be directly calculated by solving (3.3), expanded as three linear equations with three unknowns.
However, for 1-D or 2-D structures, there must be at least one direction \(\vec{n}\) satisfying \(\partial /\partial n \sim 0\), so that the determinant of the magnetic gradient tensor satisfies \(\det (\nabla \vec{B})\sim 0\). This is the reason why directly solving (3.3) will produce inaccurate solutions, which may result in apparently turbulent velocity components. This can be seen in the figures in Sects. 3.4.3 and 3.4.4 for 2-D and 1-D structures: the inaccurate solution along the invariant direction is distributed over all three components of the velocity in the GSE coordinate system and makes all three components contain large uncertainties. Therefore, from the magnetic field data we can only expect a reliable velocity determination along one direction for a 1-D structure, and along two directions for a 2-D structure. To solve this problem, we need to use the MDD method to find the structure's dimension number and its characteristic (principal) directions, using multi-point magnetic field measurements, as introduced in Sect. 2.4. Once the structure's dimension number and the three principal directions are determined, we can solve the problem in the MDD eigenvector-based coordinate system, i.e., the D-based coordinate system. Alternatively, (3.3) can be transformed into \(\frac{\partial \vec{B}}{\partial t} \big\vert_{\mathrm{sc}} \overleftrightarrow{T}_{r}^{T} \overleftrightarrow{T}_{r} ( \nabla \vec{B} )^{T} \overleftrightarrow{T}_{r}^{T} = - \vec{V}_{\mathrm{str}} \overleftrightarrow{T}_{r}^{T} \overleftrightarrow{T}_{r} ( \nabla \vec{B} ) ( \nabla \vec{B} )^{T} \overleftrightarrow{T}_{r}^{T}\), where \(\overleftrightarrow{T}_{r} = \{ \vec{n}_{1}, \vec{n}_{2}, \vec{n}_{3} \}\) (here ',' separates rows) is the transformation matrix from the original coordinate system (e.g., GSE) to the MDD eigenvector-based coordinate system. We get

$$ \frac{\partial \vec{B}}{\partial t} \bigg\vert_{\mathrm{sc,MDD}} \cdot ( \nabla \vec{B} )^{T} \big\vert_{\mathrm{MDD}} = - \vec{V}_{\mathrm{str}} \big\vert_{\mathrm{MDD}} \cdot \overleftrightarrow{\Lambda } $$

where, from Sect. 2.4, we find that \(\overleftrightarrow{\Lambda } = \overleftrightarrow{T}_{r} ( \nabla \vec{B} ) ( \nabla \vec{B} )^{T} \overleftrightarrow{T}_{r}^{T}\) is a diagonal matrix, whose diagonal terms are the three eigenvalues \(\lambda_{\max}\), \(\lambda_{\mathrm{mid}}\), and \(\lambda_{\min}\) of the \(L\) matrix introduced above.
Here \(\vec{V}_{\mathrm{str}} \vert_{\mathrm{MDD}} = \vec{V}_{\mathrm{str}} \cdot \overleftrightarrow{T}_{r}^{T}\) is the velocity vector in the basis formed by the eigenvectors. We can then solve the linear equations (3.4) one by one, provided the corresponding eigenvalue is significant. That is, for a 1-D structure, we solve only the first equation, corresponding to the largest eigenvalue \(\lambda_{\max}\), and obtain the velocity along its direction of variation, i.e., its normal; for a 2-D structure, we solve only the first two equations, related to \(\lambda_{\max}\) and \(\lambda_{\mathrm{mid}}\), and obtain the velocity components along the maximum and intermediate derivative directions. An alternative way is to first calculate the three components of the velocity by solving the difference equations of (3.3), and then project the result onto the three eigenvector directions (those of \(\lambda_{\max}\), \(\lambda_{\mathrm{mid}}\), and \(\lambda_{\min}\)). The velocities along the maximum direction (for a 1-D structure) or the maximum and intermediate directions (for a 2-D structure) can have relatively reliable accuracy, but the other direction(s) will not. From the figures in Sects. 3.4.3 and 3.4.4, we then find that for a 2-D structure the velocities along \(\mathit{Nmax}\) and \(\mathit{Nmid}\) are no longer turbulent and the only turbulent velocity is along \(\mathit{Nmin}\); for a 1-D structure the velocity along \(\mathit{Nmax}\) is no longer turbulent and the turbulent velocities are along \(\mathit{Nmin}\) and \(\mathit{Nmid}\). Since generally the variations are not exactly 1-D or 2-D, the corresponding eigenvalues are not exactly zero (so \(\det(\nabla \vec{B} )\) is not strictly equal to zero). This has been shown to be the case over many years of data analysis (see e.g., Shi et al. 2006, 2009a, 2009b, 2013; Sun et al. 2010).
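The component-by-component solve of (3.4) can be sketched as follows (a synthetic 2-D gradient tensor, not data from the paper; the eigenvalue threshold is an assumed tolerance). Only the components along eigenvectors with significant eigenvalues are solved; the component along the invariant axis is left undetermined:

```python
import numpy as np

# MDD-basis STD solve for a 2-D structure.  Build a grad(B) tensor with no
# variation along z (G[i, j] = dB_j/dx_i, third row zero -> 2-D).
nx, ny = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
G = 2.0 * np.outer(nx, np.array([0.0, 1.0, 0.3])) \
  + 1.0 * np.outer(ny, np.array([1.0, 0.0, -0.2]))
V_true = np.array([-4000.0, -420.0, 40.0])
dBdt = -V_true @ G                        # observed time derivative, eq. (3.3)

L = G @ G.T                               # MDD matrix
w, T = np.linalg.eigh(L)                  # ascending: lambda_min first
a = dBdt @ G.T                            # equals -V . L  (eq. (3.4) in vector form)
V_mdd = np.full(3, np.nan)
for k in range(3):
    if w[k] > 1e-6 * w[-1]:               # keep only significant eigenvalues
        V_mdd[k] = -(a @ T[:, k]) / w[k]  # component of V along eigenvector k

# Along lambda_min (the invariant axis) the velocity stays undetermined (nan);
# along the other two eigenvectors the projections of V_true are recovered.
assert np.isnan(V_mdd[0])
assert np.allclose(V_mdd[1:], [V_true @ T[:, 1], V_true @ T[:, 2]])
```

Using `nan` for the unsolved component makes explicit that the velocity along the invariant axis is arbitrary, rather than letting a near-singular solve fill it with noise.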
In benchmark calculations for a pure 1-D or 2-D case, when some eigenvalues are zero in a certain direction, it is found that the calculation seldom overflows: because of the limited number of digits a computer can handle, there will be a very small deviation from pure 1-D or 2-D. Even when we get strictly zero eigenvalues at some calculation points, by adding very small random perturbations (e.g., \(10^{-5}\) of the original values) to the original field one can still get very accurate dimension numbers and directions. This is a good example of observational errors playing a positive role. It is similar to the positive effect that numerical errors can play in fluid simulations, where dissipation provided by the accumulated error can stabilize the numerical scheme. The above two solutions are intrinsically identical in the analysis of real data: the transformation to the MDD coordinate system before (in the first way) or after (in the second way) the calculation gives the same results. In practice, we can use both of them and cross-check the results. Here we summarize the practical steps needed to perform the STD analysis on actual data, as illustrated in Fig. 11:

Practical steps needed to perform STD analysis on actual data

1. The MDD analysis is carried out to obtain the dimensionality (dimension number) of the structure.

2. We solve the problem by a method depending on the structure dimension number.
For a 3-D structure we can calculate the three components of the velocity vector after estimating the magnetic gradient tensor G at every moment and the time variation of the magnetic field, \(\partial \vec{B}/\partial t \vert _{\mathrm{sc}}\); for a 2-D (1-D) structure, we can solve (3.3) in the original coordinate system (e.g., in GSE) and then project the velocity vector onto the eigenvectors calculated from the MDD method, or calculate the velocity along two directions or one direction (by solving (3.4)) in the coordinate system determined by the eigenvectors of the MDD analysis. We emphasize here that the setting of the time step \(\Delta t\) is sometimes very important, and it should be set according to the characteristic length of the observed structure, neither too short nor too long, as discussed above.

3. Finally, the velocity is obtained along the normal direction for a 1-D structure, or along the directions perpendicular to the invariant axis for a 2-D structure.

Application to a 3-D Structure

Here we perform the STD calculation for the example of a dipole field. The equations describing its magnetic field are

$$ \left \{ \textstyle\begin{array}{l} B_{0} = 3.12 \times 10^{4}~\mbox{nT} \qquad R = 1~\mbox{km} \qquad r = \sqrt{x^{2} + y^{2} + z^{2}} \\ \phi = \arccos (x/\sqrt{x^{2} + y^{2}} ) \qquad \theta = \arccos (z/r) \\ B_{x} = - \frac{3}{2}B_{0}(R/r)^{3}\sin 2\theta \sin \phi \\ B_{y} = - \frac{3}{2}B_{0}(R/ r)^{3}\sin 2\theta \cos \phi \\ B_{z} = - B_{0}(R/r)^{3}(1- 3\cos ^{2}\theta ) \end{array}\displaystyle \right . $$

For a bar magnet or a simple dipole field, \(\partial \vec{B}/ \partial t \vert _{\mathrm{str}} = 0\) is satisfied strictly. Suppose that it moves along one direction at a velocity of \([140, -160, -120]\) m/s in an arbitrary coordinate system, as seen in Fig. 12.
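A benchmark of this kind can be reproduced end-to-end in a few lines. The sketch below is our own reimplementation of the experiment, not the authors' code; the spacecraft positions, tetrahedron size, and time step are assumed values, and the recovered velocity is only accurate to a few per cent because of the finite-difference approximations:

```python
import numpy as np

B0, R = 3.12e4, 1.0                            # nT, km, as in model (3.5)
V_true = np.array([0.140, -0.160, -0.120])     # structure velocity, km/s

def dipole(r):
    """Dipole field of model (3.5), written with sin/cos of phi and theta
    taken directly from the Cartesian components (equivalent for y >= 0)."""
    x, y, z = r
    rr = np.linalg.norm(r)
    rho = np.hypot(x, y)
    s = (R / rr)**3
    sin2t = 2 * (rho / rr) * (z / rr)          # sin(2*theta)
    Bx = -1.5 * B0 * s * sin2t * (y / rho)     # sin(phi) = y/rho for y >= 0
    By = -1.5 * B0 * s * sin2t * (x / rho)     # cos(phi) = x/rho
    Bz = -B0 * s * (1 - 3 * (z / rr)**2)
    return np.array([Bx, By, Bz])

# Four virtual spacecraft in a small tetrahedron (km); the structure sweeps past
center = np.array([2.0, 3.0, 1.5])
sc = center + 0.002 * np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]])

def measure(t):                                # field seen by each spacecraft
    return np.array([dipole(p - V_true * t) for p in sc])

dt = 0.02                                       # s, central-difference step
B = measure(0.0)
dBdt = (measure(dt) - measure(-dt)).mean(axis=0) / (2 * dt)

# Four-point estimate of G[i, j] = dB_j/dx_i (least squares about barycenter)
dr = sc - sc.mean(axis=0)
dB = B - B.mean(axis=0)
G, *_ = np.linalg.lstsq(dr, dB, rcond=None)

V_str = -np.linalg.solve(G.T, dBdt)            # STD solve of (3.3)
assert np.allclose(V_str, V_true, rtol=0.1)    # recovered to within a few %
```

Shrinking the tetrahedron or the time step further reduces the discretization error, mirroring the requirement in the text that the spacecraft separation resolve the spatial scale of the structure.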
We then place four virtual spacecraft within the magnetic field structure and move the structure with respect to the satellites in order to produce the satellite time-series data. From MDD we find that the structure is 3-D, so all three velocity components can be calculated, giving in this case \([135.35, -163.39, -123.98]\) m/s. The result is very accurate. Similar results can be expected for quadrupole and higher-order magnetic fields. We therefore expect that for any magnetic field derived from a scalar potential, which has no in situ current (and can be a superposition of dipole, quadrupole and higher-order fields), the translation velocity can be calculated by the STD method.

Fig. 12 STD results for a modeled 3-D dipole field (model given in (3.5)). (a) Magnetic field observed along a virtual satellite trajectory; (b) square root of eigenvalues \(\lambda_{\max}\), \(\lambda_{\mathrm{mid}}\), and \(\lambda_{\min}\) of the matrix \(L\); (c) velocity along the maximum derivative direction \(\vec{n}_{\max}\); (d) velocity along the intermediate derivative direction \(\vec{n}_{\mathrm{mid}}\); (e) velocity along the minimum derivative direction \(\vec{n}_{\min}\); (f) velocity of the structure in 3-D

For most cases in space, the fields contain contributions from both fields generated by in situ currents and fields generated by remote currents (i.e., scalar potential magnetic fields). The STD method can still be directly applied as long as the structure is 3-D, provided that the spacecraft separation is small enough to resolve the spatial scale of the structure. Shi et al. (2006) investigated a possible 3-D structure near the polar cusp region using Cluster data, as shown in Fig. 13. Wendel and Adrian (2013), using this method, obtained the velocity of a structure and found the results consistent with those obtained from the superposed epoch approach (Fig. 14).
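For such 3-D applications, solving (3.3) for the three velocity components reduces to a small linear solve at each time step. The following is a minimal sketch of this step only (our own illustration; the gradient tensor is assumed to have been estimated already, e.g., by a four-point linear estimate):

```python
import numpy as np

def std_velocity(G, dBdt_sc):
    """Structure velocity from (dB/dt)_sc + (V_str . grad) B = 0.

    G       : (3, 3) gradient tensor with G[i, j] = dB_j/dx_i
    dBdt_sc : (3,) time derivative of B measured at the spacecraft

    Valid for a 3-D structure, for which G is non-singular.
    """
    # ((V . grad) B)_j = sum_i V_i G[i, j]  =>  G^T V = -(dB/dt)_sc
    return np.linalg.solve(G.T, -np.asarray(dBdt_sc, dtype=float))
```

For a 2-D (1-D) structure \(G\) is singular, which is why only the velocity components in the variant plane (along the normal) can be recovered, as described in the procedure above.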
In the superposed epoch approach, Wendel and Adrian (2013) used a linear approximation to produce a superposed epoch snapshot of the magnetic field structure around the null at every time moment, which provides the instantaneous position of the null with respect to the spacecraft and therefore yields the velocity of the null.

Fig. 13 STD calculation for a 3-D structure (from Shi et al. 2006)

Fig. 14 Comparison of results from the superposed epoch and STD methods, adapted from Wendel and Adrian (2013)

If the three eigenvectors are stable during the crossing of a localized magnetic structure, we can use the three MDD directions to build a new coordinate system that represents the principal directions of the structure, which may also help in analyzing the structure. Here we use the sequence of 2-D flux ropes described in Sect. 2.4.3, assuming that it moves along one direction at a velocity of \([-4000, -420, 40]\) in an arbitrary coordinate system. In this case the velocity perpendicular to the invariant axis is along \((0.707, 0.707, 0)\). We then place four virtual satellites at one position and let the structures pass by in order to produce the satellite time-series data, from which we calculate the structure velocity. From the MDD calculation we find that the structure is 2-D, so only two velocity components can be calculated. The velocity perpendicular to the invariant axis is then \([-3887.80, -420.65, -0.71]\) when we average over the whole interval in Fig. 15, while the velocity along the invariant axis remains arbitrary.

Fig. 15 STD results for the modeled 2-D flux ropes (model given by (2.8) with \(a=0.225\)).
(a) Magnetic field observed along the trajectory; (b) square root of eigenvalues \(\lambda_{\max}\), \(\lambda_{\mathrm{mid}}\), and \(\lambda_{\min}\) of the matrix \(L\); (c) velocity along the maximum derivative direction \(\vec{n}_{\max}\); (d) velocity along the intermediate derivative direction \(\vec{n}_{\mathrm{mid}}\); (e) velocity along the minimum derivative direction \(\vec{n}_{\min}\); (f) calculated velocity components perpendicular to the invariant axis, i.e., in the variant plane

For a 2-D case, the velocity can be calculated only perpendicular to the (invariant) axis. Typically we find that \(V_{\min}\) fluctuates, and \(V_{\max}\) or \(V_{\mathrm{mid}}\) are often not stable themselves (mostly because the maximum and minimum directions are not stable), but \(V_{\mathrm{2D}}\), the vector composed of \(V_{\max}\) and \(V_{\mathrm{mid}}\), is stable and represents the motion perpendicular to the axis. For the large-scale flux rope event shown in Sect. 2.5, we can calculate the velocity of the structure perpendicular to the axis, i.e., the velocity in the variant plane, as shown in Fig. 8d. From the results we can see the velocity as a function of time, from which the acceleration can be derived. The average velocity of the leading edge is \((-47.613, -88.763, -109.121)\) km/s, and that of the trailing edge is \((-22.762, -26.366, -33.429)\) km/s. The different velocities of the two edges suggest that the flux rope is expanding. In Fig. 16 we show an STD calculation using MMS data for a small-scale magnetosheath flux rope event in GSE coordinates (Yao et al. 2019). The calculation in the central part of the structure (blue shaded area in Fig. 16) shows that the small-scale flux rope is 2-D (Fig. 16b) and that the velocity can be obtained perpendicular to the flux rope axis (Fig. 16f).
In the core of the flux rope (∼14:07:56.35–14:07:56.45 UT) the structure is even more nearly 2-D than in the outer parts, judging from the eigenvalues in Fig. 16b. Over the duration indicated by the shaded area the calculation quality indicators are well below 0.4 (Fig. 16g), which suggests the linear assumption is valid. \(V_{\mathrm{str\_2D}}\) (Fig. 16f) is the resultant of \(V_{\max}\) (Fig. 16d) and \(V_{\mathrm{mid}}\) (Fig. 16e). The axis direction appears to be very stable (Fig. 16c) and the velocity varies little (Fig. 16f), which means that the flux rope moved at a roughly constant velocity. Another 2-D example, for the magnetotail current sheet, can be found in Shi et al. (2006).

Fig. 16 STD analysis on a flux rope event: (a) GSM Bx observed by MMS1-4 along the trajectory; (b) square root of eigenvalues \(\lambda _{\max}\), \(\lambda_{\mathrm{mid}}\), and \(\lambda_{\min}\) of the matrix \(L\) (dashed horizontal line indicates \(\delta B/l_{\max } \), given measurement error \(\delta B = 0.05\) nT and the largest separation among spacecraft \(l_{\max } \); see the discussion in Sect. 4.1); (c) minimum derivative direction \(\vec{n}_{\min}\); (d) velocity along the maximum derivative direction \(\vec{n}_{\max}\); (e) velocity along the intermediate derivative direction \(\vec{n}_{\mathrm{mid}}\); (f) velocity of the 2-D structure (\(V_{\max}\) and \(V_{\mathrm{mid}}\) combined); (g) the calculation quality indicators calculated in two ways, which show the quality of the linear assumption: \(|\nabla \cdot \vec{B}|/|\nabla \times \vec{B}|\) (blue line) and \(|\nabla \cdot \vec{B}|/\max (|\partial B_{i}/\partial j|)\ (i,j=x,y,z)\) (red line). The blue shaded region marks the interval when the SC cross the flux rope

Here we use the same 1-D current sheet as in Sect. 2.4.2. We assume that it moves along one direction at a velocity of \([1200, 10, 125]\) in an arbitrary coordinate system. In this case only the velocity component along the variant (\(z\)) axis can be well estimated.
We then place four virtual satellites at one position and let the structure pass by in order to produce the satellite time-series data, from which we calculate the structure velocity. From MDD we find that the structure is 1-D, so only one velocity component can be calculated. The velocity along the variant axis turns out to be \([0.12, 0.30, 124.89]\) when we average over the crossing, as shown in Fig. 17; the velocity perpendicular to the normal remains arbitrary. Figure 18 shows a calculation during a magnetopause crossing event (Russell et al. 2017) observed by MMS at the dusk flank of the magnetosphere. The normal direction during the time indicated by the shaded area is stable, while slight variations in the velocity may indicate some acceleration of the current sheet. For a 1-D case, the velocity can be calculated only along the normal (variant axis). The velocities of the magnetopause along the maximum derivative direction \(\vec{n}_{\max}\), calculated by the STD method using different \(\Delta t\) ranging from 0.3 s (not shown) to 2 s (here), are quite similar.

Fig. 17 STD results for a modeled 1-D current sheet (same model and parameters as Fig. 6): (a) magnetic field observed along the trajectory; (b) square root of eigenvalues \(\lambda_{\max}\), \(\lambda _{\mathrm{mid}}\), and \(\lambda _{\min}\) of the matrix \(L\); (c) velocity of the current sheet; (d) velocity along the maximum derivative direction \(\vec{n}_{\max}\); (e) velocity along the intermediate derivative direction \(\vec{n}_{\mathrm{mid}}\); (f) velocity along the minimum derivative direction \(\vec{n}_{\min}\). \(10^{-7}\) nT has been added to the background field in order to avoid singularities when calculating eigenvalues.
Note that for this pure 1-D structure the velocity is valid only along the maximum direction, i.e., only \(V_{\max}\) is reliable, which is why \(V_{\mathrm{mid}}\) and \(V_{\min}\) appear more turbulent

Fig. 18 MDD and STD analysis on a magnetopause crossing event (Russell et al. 2017) observed by MMS at the dusk flank of the magnetosphere: (a) GSM Bz observed by MMS1-4 along the trajectory; (b) square root of eigenvalues \(\lambda _{\max}\), \(\lambda _{\mathrm{mid}}\), and \(\lambda _{\min}\) of the matrix \(L\) (dashed horizontal line indicates \(\delta B/l_{\max } \), given measurement error \(\delta B = 0.05\) nT and the largest separation among spacecraft \(l_{\max } \); see the discussion in Sect. 4.1); (c) the Rezeau et al. dimensionality indices of the structure \(D1 = \frac{\sqrt{\lambda _{\max }} - \sqrt{\lambda _{\mathrm{mid}}}}{\sqrt{\lambda _{\max }}} \), \(D2 = \frac{\sqrt{\lambda _{\mathrm{mid}}} - \sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}} \), \(D3 = \frac{\sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}} \); (d) maximum derivative direction \(\vec{n}_{\max}\); (e) velocity of the magnetopause along the maximum derivative direction \(\vec{n}_{\max}\); (f) the calculation quality indicators calculated in two ways: \(|\nabla \cdot \vec{B}|/|\nabla \times \vec{B}|\) (blue line) and \(|\nabla \cdot \vec{B}|/\max (|\partial B_{i}/\partial j|)\ (i,j=x,y,z)\) (red line)

Application to Determine a Pass-Through Spatial Structure and Enter/Retreat Structure

Shi et al. (2009a) have summarized procedures for empirically distinguishing spatial "pass through" effects (field changes due to one or more structures passing through or being passed by the spacecraft) from "enter/retreat" effects (field changes caused by a structure passing the spacecraft in one direction and then moving back over the spacecraft) from field profiles measured by more than one spacecraft.
This is very useful because it helps us make a quick judgement between these two effects just by inspecting the relative profiles from different satellites. If we find "interlaced" profiles of the form shown in Fig. 19e, it should be a spatial "pass through" structure (for example, a surface wave with finite amplitude). If we observe "nested" field profiles (Fig. 19d), they can be interpreted as either an "enter/retreat" or a spatial "pass through" effect, depending on the relative position between the spacecraft and the structure/boundary. If we have four identical spacecraft forming a tetrahedron, the ability to distinguish the two effects is greatly enhanced.

Fig. 19 Illustration of pass-through spatial structure and enter/retreat structure observed by two satellites, adapted from Shi et al. (2009a). (a) "Enter/retreat" effect: the spacecraft entering one region and then moving back, crossing the same boundary; (b) spatial "pass through" effect: the two spacecraft passing through a structure along one line; (c) spatial "pass through" effect: the two spacecraft passing through different parts of a structure; (d) the "nested" field profiles measured by the two spacecraft; and (e) the "interlaced" field profiles

Having at least four spacecraft measurements allows us to quantitatively distinguish these two effects by calculating the boundary motion velocity and direction, using either the timing method or the STD method. Shi et al. (2009a) applied these techniques to a magnetic hole in the cusp detected by Cluster. From Fig. 20a, we can see that the total magnetic field shows "interlaced" profiles, e.g., for SC3 and SC4, which suggests a spatial structure being traversed during this time interval. The calculation of the boundary velocity then quantitatively shows a "pass through" structure, because the leading and trailing boundaries move along almost the same direction, as is evident from Fig. 20c.
Fig. 20 A pass-through spatial structure example: a magnetic hole in the cusp. From Shi et al. (2009a). (a) Total magnetic field observed by the four spacecraft; (b) MDD eigenvalues (magnetic field variations) along the minimum, intermediate, and maximum derivative directions; (c) maximum derivative direction, here identical to the 1-D boundary velocity direction; and (d) speed along \(n_{1}\) at every moment. The shaded regions are the leading boundary (LB) and the following/trailing boundary (FB). Since the sign of the eigenvector \(n_{1}\) is arbitrary in the MDD calculation (see the text), here we set it along the velocity direction. (e) and (f) Illustration of the Cluster tetrahedron configuration, and the boundary surface, normal direction and velocity in the \(\mathit{XY}\) and \(\mathit{XZ}\) planes in GSE, respectively. The arrows indicate the boundary velocities. The magnetic hole lies between these two boundaries. Note that in this case the tetrahedron is magnified for the reader's convenience

Now we show an example of an enter/retreat structure at the boundaries of the cusp. From Fig. 21b we can see that these two boundaries are roughly 1-D (the intervals of rapid change in the time series indicate a boundary crossing), so we can only obtain the velocity along the one direction in which the field has maximum variation, the normal. The velocity along the normal is shown in Fig. 21c. The valid results are in the two shaded intervals where all four spacecraft are within the same structure. For the two traversals in Fig. 21, we find that the velocities are stable within each shaded interval as the spacecraft entered and exited the cusp. Thus, one can build a reference frame with a nearly constant velocity for each traversal.
The mean speed of the first crossing is 21.0 km/s along \((-0.417, -0.276, -0.866)\) in GSM, while that of the second boundary is 15.9 km/s along \((0.047, 0.209, 0.977)\) in GSM, roughly opposite to the first, indicating an enter/retreat of the cusp. From this figure one may also see that STD can resolve the instantaneous velocity of the boundaries.

Fig. 21 Enter/retreat structure example, from Shi et al. (2009b). (a) Total magnetic field observed by the four spacecraft; (b) magnetic field variations along the minimum, intermediate, and maximum derivative directions; and (c) boundary velocity along the normal in GSM at every moment. (d) Schematic illustration of the crossing of high-latitude boundaries

Uncertainties and Cautions Concerning Various Analysis Methods

The uncertainties of MVA have been discussed by Sonnerup and Scheible (1998). Errors of the timing method have been discussed by various authors (e.g., Zhou et al. 2009; Vogt et al. 2011; Xiao et al. 2015; Plaschke et al. 2017). Here we mainly discuss the uncertainties and cautions for the MDD and STD analyses. Since these two methods are based on the estimation of the gradient of the magnetic field, \(G=\nabla \vec{B}\), just as the current estimation method is, a simple guideline is that whenever the current calculation is accurate, the MDD calculation should be accurate. The accuracy of the STD method depends not only on the accuracy of the \(G=\nabla \vec{B}\) estimate, but also on two other factors: the accuracy of the calculated \(( \frac{\partial \vec{B}}{\partial t} ) _{\mathrm{sc}}\) and the validity of the steady-state assumption \(( \frac{\partial \vec{B}}{\partial t} )_{\mathrm{str}} = 0\). Robert et al. (1998) have analyzed the current calculation uncertainties in detail.
They also proposed an empirical measure of the current calculation error (or calculation quality): if the magnitude of \(|\nabla \cdot \vec{B}/\nabla \times \vec{B}|\) is greater than a certain value (such as 0.4), we deem the current calculation error too large and the result not credible. We can also use an error indicator in an MDD plot, as in Fig. 6, when building a D-based coordinate system. Considering that sometimes the magnetic gradient is large but the current is small (e.g., in the center of a magnetic hole or a magnetic mirror (Tian et al. 2019; Yao et al. 2016, 2017; Shi et al. 2009a)), we also use another index, \(|\nabla \cdot \vec{B}|/\max (|\frac{\partial B_{i}}{\partial j}|)\ (i,j=x,y,z)\), where \(\max (|\frac{ \partial B_{i}}{\partial j}|)\) is the maximum absolute value among all components of \(G=\nabla \vec{B}\). Here we summarize the error sources of the MDD method: the measurement error of the magnetic field, the error in the determination of the relative satellite positions, the non-simultaneity of the measurements made by different spacecraft, and truncation errors in the estimation of the matrix \(G=\nabla \vec{B}\) (the spacecraft tetrahedron shape parameters, cf. Robert et al. 1998, are included as a source of truncation error). These have been discussed extensively before (e.g., Chanteur 1998; Harvey 1998; Robert et al. 1998; Denton et al. 2010, 2012). Now we discuss the truncation errors in \(G=\nabla \vec{B}\). When using a gradient-based method with Cluster or MMS, the four spacecraft must all lie simultaneously within the same structure, and the spacecraft separation \(l_{\mathrm{sc}}\) must be much smaller than the scale of the structure \(l_{\mathrm{str}}\), i.e., \(l_{\mathrm{sc}}\ll l_{\mathrm{str}}\). In practice, we can look at the profiles of the fields measured by the different spacecraft in order to determine whether the separation is sufficiently small.
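The two quality indicators just defined follow directly from the estimated gradient tensor. A minimal sketch (our own helper function, using the convention \(G[i,j] = \partial B_{j}/\partial x_{i}\)):

```python
import numpy as np

def quality_indicators(G):
    """Two linear-estimate quality indicators from a gradient tensor G,
    with G[i, j] = dB_j/dx_i.

    Returns (|div B| / |curl B|,  |div B| / max|dB_i/dj|).
    """
    divB = np.trace(G)
    curlB = np.array([G[1, 2] - G[2, 1],   # (curl B)_x
                      G[2, 0] - G[0, 2],   # (curl B)_y
                      G[0, 1] - G[1, 0]])  # (curl B)_z
    q1 = abs(divB) / np.linalg.norm(curlB)
    q2 = abs(divB) / np.abs(G).max()
    return q1, q2
```

The second index avoids the division-by-small-current problem in regions where the gradient is large but the current is small, which is exactly the motivation given in the text.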
An empirical way is simply to inspect the observed time series visually in order to judge whether the profiles of the fields measured by different spacecraft are close enough and bear some resemblance to each other, i.e., whether the four spacecraft lie within the same structure during the calculation interval. As illustrated in Fig. 22, in Fig. 22a the two spacecraft are in the same structure and linear interpolation is valid for the measurements, but in Fig. 22b,c the two spacecraft are too far apart and are not in the same structure, so that linear interpolation between them fails.

Fig. 22 (a) Example in which all spacecraft can be seen to lie within the same structure, so that the gradient can be calculated correctly. This is a magnetosheath flux rope event observed by Yao et al. (2018). (b–c) Examples showing that not all satellites are in the same structure, so that one cannot apply gradient-based methods such as MDD, Curlometer, or STD, while the timing method can still be performed

This does not mean, however, that the smaller the spacecraft separation, the better the accuracy. As pointed out by Robert et al. (1998), the relative measurement errors between every spacecraft pair become large at small separations, because the field measurement errors are nearly constant, and hence there should be an optimal separation distance. Considering the measurement error \(\delta B\), the smallest resolvable gradient of the magnetic field is \({\sim }\delta B/l_{\max } \), where \(l_{\max } \) is the largest separation among the spacecraft. In the panel of eigenvalues, we can add a dashed horizontal line to indicate \(\delta B/l_{\max } \), given \(\delta B = 0.05\) nT and \(l_{\max } \) depending on the actual configuration. If the square root of an eigenvalue lies below this line, we should be careful; see the examples in Figs. 8, 16 and 18. Denton et al.
(2010, 2012) discussed the MDD and STD methods and further developed them to study magnetic reconnection points, considering various errors in the field data which may introduce uncertainties into the calculation. First they tested these methods on a magnetotail reconnection point obtained from numerical simulations, assuming four virtual satellites passing through the structure. In these cases, they found that the characteristic directions and the moving velocity of the reconnection point can be well determined. They considered two kinds of magnetic field measurement errors that are worth attention in real satellite data analysis: digitization (noise) errors that vary randomly in time, and systematic inter-spacecraft calibration errors that are constant or at least evolve very slowly in time. In general the former are small and unlikely to be a problem, while the latter are larger and can sometimes reach 0.1 nT in the cases of Cluster and MMS. To minimize the influence of the calibration error, Denton et al. (2010, 2012) suggested using the gradient of the perturbed field \(\delta (\nabla \vec{B}) = \nabla \vec{B} - \langle \nabla \vec{B}\rangle \) instead of that of the total field \(\nabla \vec{B}\) when carrying out MDD and STD analyses. In their simulated case, they argue that the calculation is thereby improved, mainly for the intermediate and minimum directions. Teh et al. (2010) reconstructed a reconnection structure by solving the steady resistive MHD equations in two dimensions, with initial inputs of field and plasma data from a single spacecraft as it passed through the structure, using the velocity calculated as in Denton et al. (2010). However, Denton et al. (2012) also noted that if the background itself has a spatial gradient, removing the background may lead to a systematic deviation of the calculated results. Therefore, caution must be taken when using this modified approach. Tian et al.
(2019) have statistically tested the influence of spacecraft separation, noise/turbulence level, and tetrahedron shape on the accuracy of MDD results using a 2-D magnetic flux rope model. As shown in Fig. 23, the errors in the characteristic directions from the MDD method are related to the noise/turbulence level, the inter-spacecraft distance, and the spatial gradient of the structure. The noise is introduced as \(\Delta B_{j}^{\mathrm{NL}} =\mathrm{NL}\boldsymbol{\cdot } \langle \vert B \vert \rangle \boldsymbol{\cdot }\mathrm{RAND}()\), where \(j\) denotes the magnetic field component, the coefficient NL represents the noise level, and \(\langle \vert B \vert \rangle \) represents the averaged magnetic field strength (e.g. Hu and Sonnerup 2002); \(\mathrm{RAND}()\) generates normally distributed random numbers. Turbulence (natural noise) is introduced as \(\Delta B_{j}^{\mathrm{TL}} = \mathrm{TL} \cdot \langle \vert B \vert \rangle \cdot (0.4 R_{0} \cdot | k_{j} | ^{- \frac{5}{6}} )\cdot \cos [2 \pi \cdot \boldsymbol{k} \cdot \boldsymbol{r} _{{m}} ]\), where TL is a coefficient reflecting the ratio of the turbulence amplitude to the background value; \(R_{0}\) is the half width of the model flux rope; \(\boldsymbol{r}_{{m}}\) represents the position vector of the \(m\)th vertex (\(m=1\), 2, 3, 4) relative to the tetrahedron center; and \(\boldsymbol{k}\) is the wave vector with components \(\vert k_{j} \vert = \frac{1}{\lambda _{j}}\). At each moment the wavelength \(\lambda _{j}\ (j= x,y)\) is a random value satisfying \(\lambda _{j} =0.001 R_{0} +|\mathrm{rand} '()| \), where \(\mathrm{rand} '()\) generates normally distributed random numbers with mean value 0 and variance \(0.2R_{0}\). The wavelength \(\lambda _{z}\) is determined by the divergence-free condition of the magnetic field (Tian et al. 2019). Unlike noise, the turbulent magnetic field is divergence-free at every point, and its magnitude depends on the plasma environment rather than on the accuracy of the instrument.
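The component-wise noise model above can be sketched as follows. This is a minimal reading of the formula \(\Delta B_{j}^{\mathrm{NL}} =\mathrm{NL}\cdot \langle \vert B \vert \rangle \cdot \mathrm{RAND}()\); the function name and the use of NumPy's random generator are our own choices:

```python
import numpy as np

def add_noise(B, noise_level, rng=None):
    """Instrument-like noise: dB_j = NL * <|B|> * N(0, 1) per component.

    B           : (..., 3) array of field samples
    noise_level : the NL coefficient (e.g. 0.1 for a 10% noise level)
    """
    rng = np.random.default_rng(rng)
    mean_strength = np.linalg.norm(B, axis=-1).mean()  # <|B|>
    return B + noise_level * mean_strength * rng.standard_normal(B.shape)
```

Note that, unlike the turbulence model in the text, this noise is not divergence-free, which is precisely the distinction the authors draw between instrument noise and natural turbulence.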
Figure 23 shows that even with a magnetic field disturbance at the 10% level, MDD can still give robust dimensionality information as long as the size of the spacecraft tetrahedron is 0.1–1 times the structure radius. The angle of deviation between the axis predicted by MDD and the actual model axis for each run with \(\Delta R <1R _{0}\) is shown in Fig. 23f. The percentages of points with deviation less than \(30^{\circ}\) (considered accurate) in the bins <0.1, 0.1–0.3, 0.3–0.5, 0.5–0.7, 0.7–0.9, 0.9–1.1 and >1.1 are also plotted (circles connected by the curves, with values corresponding to the right-hand axis). It can be seen that the percentage is greater than 80% when \(| \nabla \cdot \boldsymbol{B} |/ | \nabla \times \boldsymbol{B} |<0.4\) or \(| \nabla \cdot \boldsymbol{B} |/\max(| \partial B _{i}/\partial j|) <0.6\). Therefore, for a given flux rope structure, 0.4 and 0.6 can be taken as (time-independent) thresholds for these two parameters, respectively. The lower the parameters, the more accurate the MDD results when the separation is small.

Fig. 23 Influence of noise and turbulence on the MDD results for a flux rope model. (a) Cross section of the model flux rope; the black dot denotes the point where the axial direction is tested. (b–c) Distribution of \(\Delta \theta \) (invariant direction error of the MDD calculation) versus noise (NL) or turbulence level (TL) and separation (\(\Delta R\)). (d, e) Distribution of the two quality indicators \(|\nabla \cdot \vec{B}|/|\nabla \times \vec{B}|\) and \(|\nabla \cdot \vec{B}|/\max (|\partial B_{i}/\partial j|)\ (i,j=x,y,z)\) versus TL and \(\Delta R\). (f) The relationship between \(\Delta \theta \) and the above parameters. Adapted from Tian et al. (2019). See text for details

One type of error in the STD method comes from the stationarity assumption, \(( \frac{\partial \vec{B}}{\partial t})_{\mathrm{str}}\sim 0\), which means we can calculate the velocity only if the structure itself changes very slowly on the time scale of the motion.
When we calculate \(\vec{V}_{\mathrm{str}}\) from the STD analysis, if the structure is steady we have \(( \frac{\partial \vec{B}}{\partial t} )_{\mathrm{str}} = -\nabla \times \vec{E}_{\mathrm{str}} = 0\) in the rest frame of the structure. To test this steady-state assumption, we can calculate \(\nabla \times \vec{E}_{\mathrm{str}}\) in the structure frame and see whether it is close to zero. Substituting \(\vec{E}_{\mathrm{str}} = \vec{E}_{\mathrm{sc}} + \vec{V}_{\mathrm{str}} \times \vec{B}\), we get \(\nabla \times \vec{E}_{\mathrm{str}} = \nabla \times \vec{E}_{\mathrm{sc}} - \vec{B}(\nabla \cdot \vec{V}_{\mathrm{str}}) + (\vec{B} \cdot \nabla )\vec{V}_{\mathrm{str}} - (\vec{V}_{\mathrm{str}} \cdot \nabla )\vec{B}\). From \(( \frac{\partial \vec{B}}{ \partial t} )_{\mathrm{sc}} + \vec{V}_{\mathrm{str}} \cdot \nabla \vec{B} = 0\) (assumed valid here) and \(( \frac{\partial \vec{B}}{\partial t} )_{\mathrm{sc}} = -\nabla \times \vec{E}_{\mathrm{sc}}\), we get \(\nabla \times \vec{E}_{\mathrm{str}} = - \vec{B}(\nabla \cdot \vec{V}_{\mathrm{str}}) + (\vec{B} \cdot \nabla )\vec{V}_{\mathrm{str}}\). So if \(\vec{V}_{\mathrm{str}}\) does not vary spatially, we should get \(\nabla \times \vec{E}_{\mathrm{str}} = 0\). This result is very reasonable: at a given moment, if we can calculate \(\vec{V}_{\mathrm{str}}\) at many different positions and find it homogeneous, then the structure is coherent and steady. On the other hand, if \(\vec{V}_{\mathrm{str}}\) is inhomogeneous, we will see expansion, compression or deformation, and the structure is not steady. How can we test whether \(\vec{V}_{\mathrm{str}}\) is spatially constant in practice? If we had far more than four satellites, so that \(\vec{V}_{\mathrm{str}}\) could be calculated simultaneously at different places using each four-satellite combination, it would be easy. However, if we only have one cluster of four satellites, we need some assumptions. For example, if we observe that \(\vec{V}_{\mathrm{str}}\) does not vary with time during the whole crossing of the structure, it is likely that \(\vec{V}_{\mathrm{str}}\) is also spatially constant. Another possible way is to perform a reconstruction. Reconstruction methods have been proposed and applied based mainly on single-point measurements, e.g., GS reconstruction as discussed in Sect. 2.2, or on multi-point measurements, e.g., first-order expansion (Wendel and Adrian 2013; Fu et al. 2015). Higher-order reconstructions based on multi-point measurements may also be possible even with only four satellites.
Assuming that the structure is stationary, we can obtain its velocity and then treat the whole crossing time of the structure as a single time moment, so that we effectively obtain a system of many satellites at different positions within the structure. The relative distances between the positions can be derived from the structure velocity and the time differences at which the satellites reach them. Then we can perform 2nd- to 3rd-order (or even higher-order, depending on the number of points) fits to obtain the whole field of the structure. Chanteur (1998) has proposed using this approach to obtain higher-order gradients of the field. For 2-D or 3-D cases this kind of reconstruction may not give accurate results perpendicular to the spacecraft trajectory. However, for the 1-D case, if the variant axis is along the trajectory, we may reconstruct the field of the structure more accurately. Then, if we reconstruct the structure from data of two different parts (corresponding to two different time moments) to obtain a whole picture of the structure, we may compare the reconstructions at different times. If the differences between the two results are only minor, the structure can be considered stationary, provided the reconstruction is sufficiently accurate. Of course this proposed approach is idealized; a much better way would be to launch tens of satellites, which would allow better, higher-order reconstructions. Other sources of STD calculation error lie in the finite-difference approximations of (3.3); generally, the truncation errors dominate, which imposes two limitations on the STD method: \(l_{\mathrm{sc}} \ll l_{\mathrm{str}}\) and \(\Delta t V_{\mathrm{str}} \ll l_{\mathrm{str}}\) (see the detailed analysis in Shi et al. 2006), where \(l_{\mathrm{sc}}\) is the spacecraft separation scale, \(l_{\mathrm{str}}\) is the structure's scale, and \(\Delta t\) is the time step used in calculating \(( \frac{\partial \vec{B}}{\partial t} )_{\mathrm{sc}}\).
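These truncation-error limits, combined with the measurement-error lower bounds on \(\Delta t\) and \(l_{\mathrm{sc}}\) discussed in the text, define admissible windows for both parameters. A small convenience helper of our own making makes the windows explicit; the "\(\ll\)" conditions are returned as hard upper limits and should in practice be satisfied with a wide margin:

```python
def std_windows(dB_err, dBdt_sc, dBdl, l_str, V_str):
    """Admissible (lower, upper) ranges for the STD time step dt and the
    spacecraft separation l_sc, from
        dB_err / |dB/dt|_sc < dt   << l_str / V_str
        dB_err / |dB/dl|    < l_sc << l_str.
    The upper limits come from '<<' conditions: stay well below them.
    """
    dt_range = (dB_err / abs(dBdt_sc), l_str / abs(V_str))
    lsc_range = (dB_err / abs(dBdl), l_str)
    return dt_range, lsc_range
```

For instance, with a calibration error of 0.1 nT, a measured field derivative of 1 nT/s, a gradient of 0.01 nT/km, a structure scale of 1000 km and a speed of 100 km/s, the usable time step lies between 0.1 s and (well below) 10 s.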
Because the accuracy of the magnetic field, \(\delta B_{i}\) (where \(B_{i}\) is one component of the magnetic field), is normally dominated by systematic inter-spacecraft calibration errors, which can reach 0.1 nT for Cluster and MMS as mentioned above, the field variation during the interval \(\Delta t\) should be larger than \(\delta B_{i}\), that is, \(\delta B_{i} < \Delta t \vert ( \frac{ \partial B_{i}}{\partial t} )_{\mathrm{sc}} \vert \), so \(\Delta t\) should satisfy \(\Delta t > \frac{\delta B_{i}}{ \vert ( \frac{\partial B_{i}}{\partial t} )_{\mathrm{sc}} \vert }\). Similarly, \(l_{\mathrm{sc}}\) should satisfy \(l_{\mathrm{sc}} > \frac{\delta B_{i}}{ \vert \frac{\partial B_{i}}{\partial l} \vert }\), where \(l \) is along a given direction. Thus, the optimal \(\Delta t\) and \(l_{\mathrm{sc}}\) satisfy \(\frac{\delta B_{i}}{ \vert ( \frac{\partial B_{i}}{ \partial t} )_{\mathrm{sc}} \vert } < \Delta t \ll \frac{l_{\mathrm{str}}}{V _{\mathrm{str}}}\) and \(\frac{\delta B_{i}}{ \vert \frac{\partial B_{i}}{ \partial l} \vert } < l_{\mathrm{sc}} \ll l_{\mathrm{str}}\). Therefore, suitable values of \(\Delta t\) and \(l_{\mathrm{sc}}\) should be chosen for each case.

Dimensionality (Dimension Number) for Different Field Quantities

In the previous sections we have only considered the dimensionality of magnetic field structures. However, if we consider other fields, such as the current density, electric field, or velocity field, we may find that they have different dimensionalities. That is, when the magnetic field is 1-D, this does not guarantee that the other parameters are also 1-D. Since the current density is \(\vec{J} = \nabla \times \vec{B}\), if \(\partial \vec{B}/\partial n = 0\) then \(\partial \vec{J}/\partial n = 0\). So the dimension number of the current density should be equal to or smaller than that of the magnetic field. The dimension number can be smaller (e.g., 2-D for the magnetic field but 1-D or constant for the current density) because the current density is calculated from the spatial derivative of the magnetic field.
Rezeau et al. (2018) argue that in low-beta plasmas, where the magnetic field controls all the other plasma parameters, one can deem that if the magnetic field is 1-D then all the other parameters will be 1-D, while in higher-beta plasmas pressure effects are important, and from the fluid momentum equations of both ions and electrons it is not certain that 1-D variations of \(B\) ensure that all the plasma parameters are 1-D. We suspect that the opposite may actually be the case: in a low-beta plasma, the force balance (MHD momentum equation) is controlled by the magnetic terms only, so it does not matter how the velocity and pressure are distributed in space (they may be 2-D or 3-D). In Fig. 24 we plot the MDD results for the velocity, the convection electric field and the magnetic field for a magnetopause crossing. We find some slight differences among the normals obtained from the \(B\), \(V\) and \(E\) fields. When beta is higher, during the early phase, all three parameters appear to be 1-D; however, when beta is lower, during the later phase, \(V\) and \(E\) appear to be 2-D structures while the magnetic field appears to be 1-D. Although Rezeau et al. (2018) proposed a generalized MDD method using a combination of magnetic field and electric field to obtain the overall dimensionality of a structure, different parameters (physical quantities) can have different dimensionalities.

Fig. 24 MDD analysis on a magnetopause crossing event (Russell et al. 2017) observed by MMS at the dusk flank of the magnetosphere: the first row shows the MDD analysis using magnetic field data, from top to bottom (a) GSM Bz observed by MMS1-4 along the trajectory; (b) square root of eigenvalues \(\lambda_{\max}\), \(\lambda_{\mathrm{mid}}\), and \(\lambda_{\min}\) of the matrix \(L\); (c) the Rezeau et al.
dimensionality indices of the structure \(D1 = \frac{\sqrt{\lambda _{\max }} - \sqrt{\lambda _{\mathrm{mid}}}}{\sqrt{\lambda _{\max }}} \), \(D2 = \frac{\sqrt{\lambda _{\mathrm{mid}}} - \sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}} \), \(D3 = \frac{\sqrt{\lambda _{\min }}}{\sqrt{\lambda _{\max }}} \); (d) maximum derivative direction \(\vec{n}_{\max}\); second row shows the MDD analysis using ion velocity, from top to bottom (f) GSM Vz observed by MMS1-4 along the trajectory; (g)–(i) same format as (b)–(d); third row shows the MDD analysis using electric field data (\(\vec{E} = - \vec{V}_{\mathrm{ion}} \times \vec{B}\)); (k) GSM Ey observed by MMS1-4 along the trajectory; (l)–(n) same format as (b)–(d). (e), (j), and (o) all show the same plasma beta

We should also note that the dimensionality may be related to the coordinate system we use. For example, an axially symmetric structure (like some kinds of flux ropes) is 2-D in Cartesian coordinates, as \(B=B(x,y)\) if \(z\) is the invariant axis. However, if we use a cylindrical coordinate system, we find that the field only varies in the \(r\) direction, i.e., \(B=B(r)\). From this point of view, the structure turns out to be 1-D.

Comparison of Various Methods

In this section we compare and contrast the methods discussed in previous sections, and try to describe where each can be best applied. We emphasize here that there are no 'better' or 'worse' methods: different methods have their best application in different circumstances. In many cases we may use several methods at the same time to compare and obtain a more reliable coordinate system and reference frame. First we discuss the difference between MVA and MDD.
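For reference, the dimensionality indices \(D1\), \(D2\), \(D3\) quoted in the figure caption are simple functions of the square roots of the MDD eigenvalues. A minimal sketch, with assumed eigenvalues chosen purely for illustration:

```python
import numpy as np

def dimensionality_indices(lam_max, lam_mid, lam_min):
    """Rezeau et al. (2018) indices built from the square roots of the
    MDD eigenvalues; by construction D1 + D2 + D3 = 1, and the largest
    index indicates whether the structure is quasi-1-D, -2-D, or -3-D."""
    s_max, s_mid, s_min = np.sqrt([lam_max, lam_mid, lam_min])
    d1 = (s_max - s_mid) / s_max
    d2 = (s_mid - s_min) / s_max
    d3 = s_min / s_max
    return d1, d2, d3

# One strongly dominant eigenvalue -> quasi-1-D (D1 close to 1).
d1, d2, d3 = dimensionality_indices(100.0, 1.0, 0.01)
print(round(d1, 2), round(d2, 2), round(d3, 2))   # 0.9 0.09 0.01
```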
As emphasized by Sonnerup and Scheible (1998), \(\lambda _{3}\ll \lambda _{2}\) from the MVA method does not automatically indicate that a 1-D current layer has been traversed, and for a 2-D structure one cannot necessarily conclude that the minimum variance direction is along the invariant axis. In other words, the fact that one or two eigenvalues are equal to zero in the MVAB method is not a sufficient condition for one- or two-dimensionality (see also Dunlop and Woodward 1998; Dunlop et al. 2002). This means that one may not directly use the MVA method to determine the dimension number or invariant axis orientation of an observed structure. In the MDD analysis, if we find an eigenvalue equal to zero, it follows that \(( \partial \vec{B}/\partial n )^{2} = ( \partial B_{x}/\partial n )^{2} + ( \partial B_{y}/\partial n )^{2} + ( \partial B_{z}/\partial n )^{2} = 0\), which means the derivatives of \(B_{x}\), \(B_{y}\), and \(B_{z}\) along this eigenvector direction \(\vec{n}\) must all be zero. This shows that the MDD method can provide a sufficient condition for one- or two-dimensionality of a structure and give the invariant axis directions at the same time. Although MVAB and MDD both look for a coordinate system that simplifies the problem, the MVA method seeks the extrema of the variance of \(B_{n}\), while the MDD approach seeks the extrema of \(|\partial \vec{B}/\partial n|^{2}\). So the extreme values and eigenvector directions obtained by the two methods are often different. We can schematically show the axis difference obtained by MDD and MVA analysis for the simplest 1-D case, as plotted in Fig. 25. For a current sheet in which the magnetic field on one side is antiparallel to the field on the other side, the magnetic field can be written as \(\vec{B} = B_{x0} \tanh ( \frac{z}{L_{0}} ) \vec{e}_{x}\). In the MDD analysis for this kind of case, as also calculated in Sect. 2.4.2, \(\vec{n}_{\max}\) will be along the \(z\) direction, and the other two axes can be any two orthogonal directions in the \(xy\) plane.
If we calculate the MVAB eigenvalues, a maximum eigenvalue \(\lambda _{\max }\) will be found which corresponds to the \(x\) direction, because the variation of \(B_{x}\) throughout the crossing is the largest, and the other two eigenvalues \(\lambda _{\mathrm{mid}}\) and \(\lambda _{\min }\) are close to zero (perhaps not strictly zero because of numerical errors), corresponding to any two perpendicular directions in the \(yz\) plane. The above conclusion remains true even if we add constant \(B_{y0}\) and \(B_{z0}\) components to the magnetic field, as in \(\vec{B} = B_{x0} \tanh ( \frac{z}{L_{0}} ) \vec{e}_{x} + B_{y0} \vec{e}_{y} + B_{z0} \vec{e}_{z}\), from which we can easily see that the normal of the current sheet is still along the \(z\) direction while the field does not vary in the \(x\) and \(y\) directions, still indicating a 1-D structure from the definition of dimensionality, although all three field components exist. Then the MDD results will be the same as those shown in Fig. 25, and the MVA results also remain the same. In Table 1 we show the MVAB results for these two cases, which suggest that for this special 1-D current sheet MVAB may not distinguish well between the mid and min directions to find the normal. Therefore, in data analysis, we can use HMVA (hybrid MVA) as suggested by Gosling and Phan (2013) and recently used by, e.g., Hietala et al. (2018) in situations where it is hard to distinguish the \(M\) and \(N\) directions but \(L\) is sufficiently clear. A schematic showing the right-handed coordinate systems given by (a) the MVA method; (b) the MDD method for a Harris current sheet without guide field Table 1 Summary of MVAB results for the 1-D current sheet of (4.2). Random errors on the order of 0.01 nT are added to the magnetic field data to resemble real data and to avoid singularity in calculating the eigenvalues of the matrix. Three runs have been carried out for two sets of parameters of the model.
We can find that in the same model the eigenvalues and eigenvectors of the intermediate and minimum directions are quite different in every run, while the eigenvalue and eigenvector of the maximum direction remain the same. On the other hand, in many observational events, e.g., in the magnetopause current sheet, using MVA we can often find the normal direction very accurately. This could be because real current sheets are seldom as ideal as (4.2): the tangential field perpendicular to the background field direction (\(L\) direction) is seldom constant and may still have variations, since 1-D magnetic field structures only require the field along the normal to be constant. When we slightly revise the field model, adding a non-constant \(B_{y}\) across the current sheet, as in \(\vec{B} = B_{x0} \tanh ( \frac{z}{L_{0}} ) \vec{e}_{x} + B_{y0} \mathrm{sech} ( \frac{z}{L_{1}} ) \vec{e}_{y} + B_{z0} \vec{e}_{z}\), which is still a 1-D field, we can then separate \(\lambda _{\mathrm{mid}}\) and \(\lambda _{\min }\) in MVAB, even if \(B_{y}\) is very small (here we set \(B_{y0} = 0.1B_{x0}\)). Then the minimum and intermediate eigenvalues are well separated and \(\mathit{Nmin}\) in MVAB is closer to the \(z\) axis, as we expected; see results in Table 2. Table 2 Summary of MVAB results for the 1-D current sheet of (4.3). For this current sheet, which is closer to real magnetopause or magnetotail current sheet data, MVA can distinguish between the mid and min directions and then obtain the correct normal In Table 3 we list the ability of various methods to solve different issues. For example, some methods need a quasi-stationary assumption while some do not; some can obtain instantaneous results to show the variation of direction or frame velocity, while some can obtain only one velocity using the data for the whole crossing; some have a presumed dimension number, some do not need this assumption.
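This contrast between models (4.2) and (4.3) is easy to reproduce numerically. The sketch below applies a textbook MVA (covariance eigen-decomposition) to synthetic crossings of both models; the crossing path, \(L_{0} = L_{1} = 1\), \(B_{z0} = 0\) and \(B_{y0} = 0.1B_{x0}\) are assumed values chosen for illustration.

```python
import numpy as np

def mva(B):
    """Minimum variance analysis: eigen-decomposition of the magnetic
    covariance matrix; returns eigenvalues in descending order and the
    corresponding eigenvectors as columns (last column = N_min)."""
    M = np.cov(B.T, bias=True)
    w, v = np.linalg.eigh(M)
    order = np.argsort(w)[::-1]
    return w[order], v[:, order]

z = np.linspace(-5.0, 5.0, 401)   # synthetic crossing path (L0 = L1 = 1)
Bx0, By0 = 1.0, 0.1

# Model (4.2): pure Harris sheet. lambda_mid and lambda_min are both
# essentially zero, so MVA cannot single out the normal direction.
B42 = np.column_stack([Bx0 * np.tanh(z), 0 * z, 0 * z])
w42, _ = mva(B42)

# Model (4.3)-like field with a non-constant By: mid and min separate,
# and the minimum-variance direction recovers the z normal.
B43 = np.column_stack([Bx0 * np.tanh(z), By0 / np.cosh(z), 0 * z])
w43, v43 = mva(B43)
print(w42[1:])             # both ~ 0 (degenerate mid/min)
print(w43[1:])             # mid clearly separated from min
print(np.abs(v43[:, 2]))   # ~ [0, 0, 1]
```

With noise added to the synthetic field, as in Table 1, the degenerate eigenvectors of model (4.2) change from run to run while those of model (4.3) stay stable.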
Table 3 Capabilities and requirements of various methods in solving different issues

Potential Applications of Gradient Based Methods in Simulation Data Analysis and Other Problems

The MDD and STD methods can be effectively used in numerical simulations. In the analysis of simulated data, we can calculate dimension numbers, characteristic directions and velocities of simulated magnetic field structures (such as plasmoids, FTEs, the magnetopause current sheet, the bow shock, and the magnetotail current sheet). It is also very convenient to automatically calculate the time variation of velocity and direction for a simulated structure. Compared to satellite data analysis, it is more convenient to use these methods on numerical simulation data. First, in principle the analysis is not restricted by satellite separation, and the precision of the results can be greatly improved because the grid points in a numerical simulation must be close enough that their spacing is much smaller than the structure scale. Second, the number of 'satellites' can be unlimited: one mesh point corresponds to one satellite. So we can calculate the structure's direction and velocity distribution over the whole simulation domain. Third, unlike the situation we meet in real data analysis, there are always measured points inside the structure with which to calculate velocity and direction. Fourth, the dimension number used in the numerical simulation can be tested. As described above, the definition of dimension in MDD is exactly the same as that in numerical simulation. For example, if the structure is simulated in 3-D but is found by the MDD analysis to be a two-dimensional structure, then we may switch to a 2-D simulation for this case, greatly reducing the computation time. This can be done automatically by computer programs. The studies of Denton et al.
(2010, 2012) were the first attempts to use these methods in numerical simulations, although they used only four points in their calculations. Figure 26 shows a distribution of the maximum eigenvalue from MDD of the magnetic field in a global MHD simulation, from which we can easily identify the magnetopause and the current sheet. Spatial distribution of \(\lambda _{\max}\) from MDD of the magnetic field in a global MHD simulation In laboratory plasmas, if we have multi-point magnetic probes, in principle we can apply the same techniques to obtain the proper frame and coordinate systems. This could be useful in the analysis of transient convective MHD instabilities. Incidentally, many everyday objects, such as vehicles and aircraft, produce magnetic fields, and when they move they carry their magnetic fields along with them. So using the STD method we can also measure the movement velocity of the magnetic field and thereby the speed of the object. If the magnetic field of an object does not vary with time in the object's rest frame, the first term on the right-hand side of equation (1.1) is strictly equal to 0. For example, for a bar magnet as discussed in Sect. 3.4.2, the magnetic field is a dipole field that does not vary with time. If it moves at some given velocity and we calculate the velocity using the STD method, we find our calculation is quite consistent with the given velocity (not shown here). The magnetic field of real magnetic objects may be more complicated, including dipole, quadrupole and higher order terms. In principle, the STD method should still work, and further investigation is ongoing. More work is required to test whether this is practically useful. First, we would need to put a huge number of magnetometers in space (in the atmosphere).
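The idea of recovering an object's speed from its co-moving field can be sketched with the STD relation \(( \partial \vec{B}/\partial t )_{\mathrm{sc}} = -( \vec{V} \cdot \nabla ) \vec{B}\). The sketch below uses an assumed gradient tensor and time derivative for a 1-D Harris sheet moving at speed 30 (arbitrary units); the numbers are hypothetical, and a single least-squares solve stands in for the full multi-point implementation.

```python
import numpy as np

def std_velocity(G, dBdt):
    """STD sketch: solve dB/dt|_sc = -(V . grad) B for the structure
    velocity V. With G[j, i] = dB_i/dx_j each field component gives one
    linear equation; lstsq returns the minimum-norm solution, so for a
    1-D structure only the normal component of V is determined."""
    V, *_ = np.linalg.lstsq(-G.T, dBdt, rcond=None)
    return V

# Harris sheet Bx = tanh(z - V t) moving at V = 30: at z = t = 0 the only
# non-zero gradient entry is dBx/dz = 1 and the measured dBx/dt = -30.
G = np.zeros((3, 3))
G[2, 0] = 1.0
dBdt = np.array([-30.0, 0.0, 0.0])
V = std_velocity(G, dBdt)
print(V)   # ~ [0, 0, 30]: the normal velocity is recovered
```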
It might not work if magnetometers are installed on or near vehicles, because they may observe a time-independent field (in the vehicle rest frame) under a steady state condition. There may also be substantial electromagnetic radiation from the vehicle that cannot be neglected. More work should be done to investigate the feasibility of this idea. V. Angelopoulos et al., The Space Physics Environment Data Analysis System (SPEDAS). Space Sci. Rev. 215, 9 (2019). https://doi.org/10.1007/s11214-018-0576-4 J. Birn, M. Hesse, Geospace Environment Modeling (GEM) magnetic reconnection challenge: resistive tearing, anisotropic pressure and Hall effects. J. Geophys. Res. 106(A3), 3737–3750 (2001). https://doi.org/10.1029/1999JA001001 J.L. Burch, T.D. Phan, Magnetic reconnection at the dayside magnetopause: advances with MMS. Geophys. Res. Lett. 43(16), 8327–8338 (2016). https://doi.org/10.1002/2016gl069787 L.F. Burlaga, J.K. Chao, Reverse and forward slow shocks in the solar wind. J. Geophys. Res. 76(31), 7516–7521 (1971). https://doi.org/10.1029/JA076i031p07516 G. Chanteur, Spatial interpolation for four spacecraft: theory, in Analysis Methods for Multi-Spacecraft Data, ed. by G. Paschmann, P.W. Daly (Int. Space Sci. Inst., Bern, 1998), pp. 349–369 G. Chanteur, C.C. Harvey, Spatial interpolation for four spacecraft: application to magnetic gradients, in Analysis Methods for Multi-Spacecraft Data, ed. by G. Paschmann, P.W. Daly (Int. Space Sci. Inst., Bern, 1998), pp. 349–369 W. Daughton, V. Roytershteyn, B.J. Albright, H. Karimabadi, L. Yin, K.J. Bowers, Transition from collisional to kinetic regimes in large-scale reconnection layers. Phys. Rev. Lett. 103(6) (2009). https://doi.org/10.1103/physrevlett.103.065004 J.M. Dawson, Particle simulation of plasmas. Rev. Mod. Phys. 55, 403–447 (1983) F. De Hoffmann, E. Teller, Magneto-hydrodynamic shocks. Phys. Rev. 80(4), 692–703 (1950). https://doi.org/10.1103/physrev.80.692 R.E. Denton, B.U.Ö. Sonnerup, J. Birn, W.-L. Teh, J.F.
Drake, M. Swisdak, M. Hesse, W. Baumjohann, Test of methods to infer the magnetic reconnection geometry from spacecraft data. J. Geophys. Res. 115, A10242 (2010). https://doi.org/10.1029/2010JA015420 R.E. Denton, B.U.Ö. Sonnerup, M. Swisdak, J. Birn, J.F. Drake, M. Hesse, Test of Shi et al. method to infer the magnetic reconnection geometry from spacecraft data: MHD simulation with guide field and antiparallel kinetic simulation. J. Geophys. Res. 117, A09201 (2012). https://doi.org/10.1029/2012JA017877 R.E. Denton, B.U.Ö. Sonnerup, H. Hasegawa, T.-D. Phan, C.T. Russell, R.J. Strangeway, B.L. Giles, D.J. Gershman, R.B. Torbert, Motion of the MMS spacecraft relative to the magnetic reconnection structure observed on 16 Oct. 2015 at 1307 UT. Geophys. Res. Lett. 43, 5589–5596 (2016). https://doi.org/10.1002/2016GL069214 R.E. Denton, B.U.Ö. Sonnerup, C.T. Russell, H. Hasegawa, T.-D. Phan, R.J. Strangeway, B.L. Giles, R.E. Ergun, P.-A. Lindqvist, R.B. Torbert, J.L. Burch, S.K. Vines, Determining \(L\)–\(M\)–\(N\) current sheet coordinates at the magnetopause from magnetospheric multiscale data. J. Geophys. Res. Space Phys. 123, 2274–2295 (2018). https://doi.org/10.1002/2017JA024619 M.W. Dunlop, T.I. Woodward, Multi-spacecraft discontinuity analysis: orientation and motion, in Analysis Methods for Multi-Spacecraft Data, ed. by G. Paschmann, P.W. Daly (Int. Space Sci. Inst., Bern, 1998), pp. 271–305 M.W. Dunlop, A. Balogh, K.-H. Glassmeier, Four-point cluster application of magnetic field analysis tools: the discontinuity analyzer. J. Geophys. Res. 107(A11), 1385 (2002). https://doi.org/10.1029/2001JA005089 J.P. Eastwood, T.D. Phan, P.A. Cassak et al., Ion-scale secondary flux ropes generated by magnetopause reconnection as resolved by MMS. Geophys. Res. Lett. 43, 4716–4724 (2016). https://doi.org/10.1002/2016GL068747 R.C. Elphic, C.T. Russell, Evidence for helical kink instability in the Venus magnetic flux ropes. Geophys. Res. Lett. 10(6), 459–462 (1983).
https://doi.org/10.1029/gl010i006p00459 C.P. Escoubet, M. Fehringer, M. Goldstein, The cluster mission. Ann. Geophys. 19, 1197 (2001) H.S. Fu, A. Vaivads, Y.V. Khotyaintsev, V. Olshevsky, M. André, J.B. Cao, S.Y. Huang, A. Retinò, G. Lapenta, How to find magnetic nulls and reconstruct field topology with MMS data? J. Geophys. Res. Space Phys. 120(5), 3758–3782 (2015). https://doi.org/10.1002/2015ja021082 K.J. Genestreti, T.K.M. Nakamura, R. Nakamura, R.E. Denton, R.B. Torbert, J.L. Burch, F. Plaschke, S.A. Fuselier, R.E. Ergun, B.L. Giles, C.T. Russell, How accurately can we measure the reconnection rate EM for the MMS diffusion region event of 11 July 2017? J. Geophys. Res. Space Phys. 123, 9130–9149 (2018). https://doi.org/10.1029/2018JA025711. J.T. Gosling, T.D. Phan, Magnetic reconnection in the solar wind at current sheets associated with extremely small field shear angles. Astrophys. J. Lett. 763, L39 (2013). https://doi.org/10.1088/2041-8205/763/2/L39 S. Haaland, B.U.Ö. Sonnerup, M.W. Dunlop, E. Georgescu, G. Paschmann, B. Klecker, A. Vaivads, Orientation and motion of a discontinuity from cluster curlometer capability: minimum variance of current density. Geophys. Res. Lett. 31(31), 377–393 (2004). https://doi.org/10.1029/2004gl020001 M. Hartinger, V. Angelopoulos, M.B. Moldwin, K.-H. Glassmeier, Y. Nishimura, Global energy transfer during a magnetospheric field line resonance. Geophys. Res. Lett. 38, L12101 (2011). https://doi.org/10.1029/2011GL047846 C.C. Harvey, Spatial gradients and the volumetric tensor, in Analysis Methods for Multi-Spacecraft Data, ed. by G. Paschmann, P.W. Daly (Int. Space Sci. Inst., Bern, 1998), pp. 307–322 H. Hasegawa, B.U.Ö. Sonnerup, M.W. Dunlop, A. Balogh, S.E. Haaland, B. Klecker, G. Paschmann, B. Lavraud, I. Dandouras, H. Rème, Reconstruction of two-dimensional magnetopause structures from cluster observations: verification of method. Ann. Geophys. 22, 1251–1266 (2004). https://doi.org/10.5194/angeo-22-1251-2004 H. 
Hasegawa, B.U.Ö. Sonnerup, B. Klecker, G. Paschmann, M.W. Dunlop, H. Rème, Optimal reconstruction of magnetopause structures from cluster data. Ann. Geophys. 23(3), 973–982 (2005). https://doi.org/10.5194/angeo-23-973-2005 H. Hasegawa, B.U.Ö. Sonnerup, C.J. Owen, B. Klecker, G. Paschmann, A. Balogh, H. Rème, The structure of flux transfer events recovered from cluster data. Ann. Geophys. 24, 603–618 (2006). https://doi.org/10.5194/angeo-24-603-2006 H. Hasegawa, B.U.Ö. Sonnerup, M. Fujimoto, Y. Saito, T. Mukai, Recovery of streamlines in the flank low-latitude boundary layer. J. Geophys. Res. 112, A04213 (2007). https://doi.org/10.1029/2006JA012101 H. Hasegawa, B.U.Ö. Sonnerup, S. Eriksson, T.K.M. Nakamura, H. Kawano, Dual-spacecraft reconstruction of a three-dimensional magnetic flux rope at the Earth's magnetopause. Ann. Geophys. 33, 169–184 (2015). https://doi.org/10.5194/angeo-33-169-2015 H. Hasegawa, B.U.Ö. Sonnerup, R.E. Denton, T.-D. Phan, T.K.M. Nakamura, B.L. Giles, D.J. Gershman, J.C. Dorelli, J.L. Burch, R.B. Torbert, C.T. Russell, R.J. Strangeway, P.-A. Lindqvist, Y.V. Khotyaintsev, R.E. Ergun, P.A. Cassak, N. Kitamura, Y. Saito, Reconstruction of the electron diffusion region observed by the magnetospheric multiscale spacecraft: first results. Geophys. Res. Lett. 44, 4566–4574 (2017). https://doi.org/10.1002/2017GL073163 H. Hasegawa, R.E. Denton, R. Nakamura, K.J. Genestreti, T.K.M. Nakamura, K.-J. Hwang et al., Reconstruction of the electron diffusion region of magnetotail reconnection seen by the MMS spacecraft on 11 July 2017. J. Geophys. Res. Space Phys. 124, 122–138 (2019). https://doi.org/10.1029/2018JA026051 L.-N. Hau, B.U.Ö. Sonnerup, Two-dimensional coherent structures in the magnetopause: recovery of static equilibria from single-spacecraft data. J. Geophys. Res. Space Phys. 104(A4), 6899–6917 (1999) H. Hietala, T.D. Phan, V. Angelopoulos, M. Oieroset, M.O. Archer, T. Karlsson, F. 
Plaschke, In situ observations of a magnetosheath high-speed jet triggering magnetopause reconnection. Geophys. Res. Lett. 45(4), 1732–1740 (2018). https://doi.org/10.1002/2017gl076525 Q. Hu, B.U.Ö. Sonnerup, Reconstruction of magnetic clouds in the solar wind: orientations and configurations. J. Geophys. Res. 107(A7) (2002) A.V. Khrabrov, B.U.Ö. Sonnerup, Orientation and motion of current layers: minimization of the Faraday residue. Geophys. Res. Lett. 25, 2373 (1998a) A.V. Khrabrov, B.U.Ö. Sonnerup, DeHoffmann–Teller analysis, in Analysis Methods for Multi-Spacecraft Data, vol. SR-001, ed. by G. Paschmann, P. Daly (International Space Science Institute, Bern, 1998b), pp. 221–248, Chap. 9 M.G. Kivelson, C.T. Russell, Introduction to Space Physics (Cambridge University Press, Cambridge, 1995) T. Knetter, A new perspective of the solar wind micro-structure due to multi-point observations of discontinuities. Ph.D. thesis (2005) T. Knetter, F.M. Neubauer, T. Horbury, A. Balogh, Four-point discontinuity observations using Cluster magnetic field data: a statistical survey. J. Geophys. Res. 109(A6), A06102 (2004). https://doi.org/10.1029/2003JA010099 R.P. Lepping, J.A. Jones, L.F. Burlaga, Magnetic field structure of interplanetary magnetic clouds at 1 AU. J. Geophys. Res. 95, 11,957–11,965 (1990) Y. Lin, D.W. Swift, A two-dimensional hybrid simulation of the magnetotail reconnection layer. J. Geophys. Res. 101(A9), 19859–19870 (1996). https://doi.org/10.1029/96JA01457 Y. Ling, Q.Q. Shi, X.-C. Shen, A.M. Tian, W. Li, B.B. Tang, A.W. Degeling, H. Hasegawa, M. Nowada, H. Zhang, I.J. Rae, Q.-G. Zong, S.Y. Fu, A.N. Fazakerley, Z.Y. Pu, Observations of Kelvin–Helmholtz waves in the Earth's magnetotail near the lunar orbit. J. Geophys. Res. Space Phys. 123, 3836–3847 (2018). https://doi.org/10.1029/2018JA025183 V. Olshevsky, A. Divin, E. Eriksson, S. Markidis, G. Lapenta, Energy dissipation in magnetic null points at kinetic scales. Astrophys. J. 807(2), 155 (2015). 
https://doi.org/10.1088/0004-637x/807/2/155 G. Paschmann, B.U.Ö. Sonnerup, Proper Frame Determination and Walén Test, ISSI Scientific Rep., vol. 8 (2008), pp. 65–74 G. Paschmann, S. Haaland, B.U.Ö. Sonnerup, H. Hasegawa, E. Georgescu, B. Klecker, T.D. Phan, H. Rème, A. Vaivads, Characteristics of the near-tail dawn magnetopause and boundary layer. Ann. Geophys. 23(4), 1481–1497 (2005). https://doi.org/10.5194/angeo-23-1481-200 T.D. Phan, J.P. Eastwood, P.A. Cassak et al., MMS observations of electron-scale filamentary currents in the reconnection exhaust and near the X line. Geophys. Res. Lett. 43, 6060–6069 (2016). https://doi.org/10.1002/2016GL069212 F. Plaschke, T. Karlsson, H. Hietala, M. Archer, Z. Vörös, R. Nakamura, W. Magnes, W. Baumjohann, R.B. Torbert, C.T. Russell, B.L. Giles, Magnetosheath high-speed jets: internal structure and interaction with ambient plasma. J. Geophys. Res. Space Phys. 122, 10,157–10,175 (2017). https://doi.org/10.1002/2017JA024471 Z.-Y. Pu, M.G. Kivelson, Kelvin–Helmholtz instability at the magnetopause: solution for compressible plasmas. J. Geophys. Res. 88(A2), 841 (1983). https://doi.org/10.1029/ja088ia02p00841 Z.Y. Pu et al., Multiple magnetic reconnection events observed by Cluster II, in Proceedings on Magnetic Reconnection Meeting, ed. by R. Lundin, R. McGregor. IRF Sci. Rep., vol. 280, Kiruna, Sweden, September 2002 (2003), pp. 60–64, Swed. Inst. of Space Phys. Z.Y. Pu, Q.-G. Zong, T.A. Fritz, C.J. Xiao, Z.Y. Huang, S.Y. Fu, Q.Q. Shi, M.W. Dunlop, K.-H. Glassmeier, A. Balogh, P. Daly, H. Reme, J. Dandouras, J.B. Cao, Z.X. Liu, C. Shen, J.K. Shi, Multiple flux rope events at the high-latitude magnetopause: cluster/rapid observation on 26 January, 2001. Surv. Geophys. 26(1–3), 193–214 (2005). https://doi.org/10.1007/s10712-005-1878-0 L. Rezeau, G. Belmont, R. Manuzzo, N. Aunai, J. Dargent, Analyzing the magnetopause internal structure: new possibilities offered by MMS tested in a case study. J. Geophys. Res. Space Phys.
123(1), 227–241 (2018). https://doi.org/10.1002/2017ja024526 P. Robert, M.W. Dunlop, A. Roux, G. Chanteur, Accuracy of current density determination, in Analysis Methods for Multi-Spacecraft Data, vol. 398, ed. by G. Paschmann, P.W. Daly (International Space Science Institute, Bern, 1998), pp. 395–418 Z.J. Rong, W.X. Wan, C. Shen, T.L. Zhang, A.T.Y. Lui, Y. Wang, M.W. Dunlop, Y.C. Zhang, Q.-G. Zong, Method for inferring the axis orientation of cylindrical magnetic flux rope based on single-point measurement. J. Geophys. Res. Space Phys. 118, 271–283 (2013). https://doi.org/10.1029/2012JA018079 C.T. Russell, A study of flux transfer events at different planets. Adv. Space Res. 16(4), 159–163 (1995) C.T. Russell, M.M. Hoppe, W.A. Livesey, J.T. Gosling, S.J. Bame, ISEE-1 and -2 observations of laminar bow shocks: velocity and thickness. Geophys. Res. Lett. 9, 1171–1174 (1982). https://doi.org/10.1029/GL009i010p01171 C.T. Russell, M.M. Mellott, E.J. Smith, J.H. King, Multiple spacecraft observations of interplanetary shocks: four spacecraft determination of shock normals. J. Geophys. Res. 88(A6), 4739 (1983) C.T. Russell, R.J. Strangeway, C. Zhao, B.J. Anderson, W. Baumjohann, K.R. Bromund, D. Fischer, L. Kepko, G. Le, W. Magnes, R. Nakamura, F. Plaschke, J.A. Slavin, R.B. Torbert, T.E. Moore, W.R. Paterson, C.J. Pollock, J.L. Burch, Structure, force balance, and topology of Earth's magnetopause. Science 356(6341), 960–963 (2017). https://doi.org/10.1126/science.aag3112 S.J. Schwartz, Shock and discontinuity normals, mach number, and related parameters, in Analysis Methods for Multi-Spacecraft Data, ed. by G. Paschmann, P.W. Daly (ESA Publications Division, Noordwijk, 1998) C. Shen, X. Li, M. Dunlop, Q.Q. Shi, Z.X. Liu, E. Lucek, Z.Q. Chen, Magnetic field rotation analysis and the applications. J. Geophys. Res. 112, A06211 (2007). https://doi.org/10.1029/2005JA011584 Q.Q. Shi, C. Shen, Z.Y. Pu, M.W. Dunlop, Q.-G. Zong, H. Zhang, C.J. Xiao, Z.X. Liu, A. 
Balogh, Dimensional analysis of observed structures using multi-point magnetic field measurements: application to cluster. Geophys. Res. Lett. 32, L12105 (2005). https://doi.org/10.1029/2005GL022454 Q.Q. Shi, C. Shen, M.W. Dunlop, Z.Y. Pu, Q.-G. Zong, Z.X. Liu, E. Lucek, A. Balogh, Motion of observed structures calculated from multi-point magnetic field measurements: application to cluster. Geophys. Res. Lett. 33, L08109 (2006). https://doi.org/10.1029/2005GL025073 Q.Q. Shi, Z.Y. Pu, J. Soucek, Q.-G. Zong, S.Y. Fu, L. Xie, Y. Chen, H. Zhang, L. Li, L.D. Xia, Z.X. Liu, E. Lucek, A.N. Fazakerley, H. Reme, Spatial structures of magnetic depression in the Earth's high-altitude cusp: cluster multipoint observations. J. Geophys. Res. 114, A10202 (2009a). https://doi.org/10.1029/2009JA014283 Q.Q. Shi, Q.-G. Zong, H. Zhang, Z.Y. Pu, S.Y. Fu, L. Xie, Y.F. Wang, Y. Chen, L. Li, L.D. Xia, Z.X. Liu, A.N. Fazakerley, H. Reme, E. Lucek, Cluster observations of the entry layer equatorward of the cusp under northward interplanetary magnetic field. J. Geophys. Res. 114, A12219 (2009b). https://doi.org/10.1029/2009JA014475 Q.Q. Shi, Q.-G. Zong, S.Y. Fu, M.W. Dunlop, Z.Y. Pu, G.K. Parks, Y. Wei, W.H. Li, H. Zhang, M. Nowada, Y.B. Wang, W.J. Sun, T. Xiao, H. Reme, C. Carr, A.N. Fazakerley, E. Lucek, Solar wind entry into the high-latitude terrestrial magnetosphere during geomagnetically quiet times. Nat. Commun. 4(1), 1466 (2013). https://doi.org/10.1038/ncomms2476 Q.Q. Shi, M.D. Hartinger, V. Angelopoulos, A.M. Tian, S.Y. Fu, Q.-G. Zong, J.M. Weygand, J. Raeder, Z.Y. Pu, X.Z. Zhou, M.W. Dunlop, W.L. Liu, H. Zhang, Z.H. Yao, X.C. Shen, Solar wind pressure pulse-driven magnetospheric vortices and their global consequences. J. Geophys. Res. Space Phys. 119(6), 4274–4280 (2014). https://doi.org/10.1002/2013ja019551 P. Song, C.T. Russell, Time series data analyses in space physics. Space Sci. Rev. 87, 387–463 (1999) B.U.Ö. Sonnerup, L.J. 
Cahill, Magnetopause structure and attitude from explorer 12 observations. J. Geophys. Res. 72(1), 171 (1967) B.U.Ö. Sonnerup, M. Guo, Magnetopause transects. Geophys. Res. Lett. 23, 3679 (1996). https://doi.org/10.1029/96gl03573 B.U.Ö. Sonnerup, H. Hasegawa, Orientation and motion of two-dimensional structures in a space plasma. J. Geophys. Res. 110, A06208 (2005). https://doi.org/10.1029/2004JA010853 B.U.Ö. Sonnerup, M. Scheible, Minimum and maximum variance analysis, in Analysis Methods for MultiSpacecraft Data, ed. by G. Paschmann, P.W. Daly (Int. Space Sci. Inst./Eur. Space Agency, Bern/Paris, 1998), pp. 185–220, Chap. 8 B.U.Ö. Sonnerup, W.-L. Teh, Reconstruction of two-dimensional coherent MHD structures in a space plasma: the theory. J. Geophys. Res. 113, A05202 (2008). https://doi.org/10.1029/2007JA012718 B.U.Ö. Sonnerup, S. Haaland, G. Paschmann, B. Lavraud, M.W. Dunlop, H. Rème, A. Balogh, Orientation and motion of a discontinuity from single-spacecraft measurements of plasma velocity and density: minimum mass flux residue. J. Geophys. Res. 109, A03221 (2004). https://doi.org/10.1029/2003JA010230 B.U.Ö. Sonnerup, H. Hasegawa, W.-L. Teh, L.-N. Hau, Grad–Shafranov reconstruction: an overview. J. Geophys. Res. 111(A9) (2006). https://doi.org/10.1029/2006JA011717 B.U.Ö. Sonnerup, S. Haaland, G. Paschmann, M.W. Dunlop, H. Rème, A. Balogh, Orientation and motion of a plasma discontinuity from single-spacecraft measurements: generic residue analysis of cluster data. J. Geophys. Res. 111, A05203 (2007). https://doi.org/10.1029/2005JA011538 B.U.Ö. Sonnerup, R.E. Denton, H. Hasegawa, M. Swisdak, Axis and velocity determination for quasi two-dimensional plasma/field structures from Faraday's law: a second look. J. Geophys. Res. Space Phys. 118, 2073–2086 (2013). https://doi.org/10.1002/jgra.50211 B.U.Ö. Sonnerup, H. Hasegawa, R.E. Denton, T.K.M. Nakamura, Reconstruction of the electron diffusion region. J. Geophys. Res. Space Phys. 121, 4279–4290 (2016). 
https://doi.org/10.1002/2016JA022430 W. Sun, Q. Shi, S. Fu, Q. Zong, Z. Pu, L. Xie, T. Xiao, L. Li, Z. Liu, H. Reme, E. Lucek, Statistical research on the motion properties of the magnetotail current sheet: cluster observations. Sci. China, Technol. Sci. 53, 1732–1738 (2010). https://doi.org/10.1007/s11431-010-3153-y W.-L. Teh, B.U.Ö. Sonnerup, First results from ideal 2-D MHD reconstruction: magnetopause reconnection event seen by cluster. Ann. Geophys. 26(9), 2673–2684 (2008). https://doi.org/10.5194/angeo-26-2673-2008 W.-L. Teh, B.U.Ö. Sonnerup, L.-N. Hau, Grad–Shafranov reconstruction with field-aligned flow: first results. Geophys. Res. Lett. 34, L05109 (2007). https://doi.org/10.1029/2006GL028802 W.-L. Teh, B.U.Ö. Sonnerup, J. Birn, R.E. Denton, Resistive MHD reconstruction of two-dimensional coherent structures in space. Ann. Geophys. 28(11), 2113–2125 (2010). https://doi.org/10.5194/angeo-28-2113-2010 T. Terasawa, Hall current effect on tearing mode instability. Geophys. Res. Lett. 10(6), 475–478 (1983) T. Terasawa, H. Kawano, I. Shinohara, T. Mukai, Y. Saito, M. Hoshino, A. Nishida, S. Machida, T. Nagai, T. Yamamoto, S. Kokubun, On the determination of a moving MHD structure: minimization of the residue of integrated Faraday's equation. J. Geomagn. Geoelectr. 48(5–6), 603–614 (1996) A.M. Tian, Q.-G. Zong, Y.F. Wang, Q.Q. Shi, S.Y. Fu, Z.Y. Pu, A series of plasma flow vortices in the tail plasma sheet associated with solar wind pressure enhancement. J. Geophys. Res. 115, A09204 (2010). https://doi.org/10.1029/2009JA014989 A.M. Tian, Q.-G. Zong, Q.Q. Shi, Reconstruction of morningside plasma sheet compressional ULF Pc5 wave. Sci. China, Technol. Sci. 55, 1092–1100 (2012). https://doi.org/10.1007/s11431-011-4735-z A.M. Tian, Q.Q. Shi, Q.-G. Zong, J. Du, S.Y. Fu, Y.N. Dai, Analysis of magnetotail flux rope events by ARTEMIS observations. Sci. China, Technol. Sci. 57, 1010–1019 (2014). https://doi.org/10.1007/s11431-014-5489-1 A. Tian, Q.
Shi, A.W. Degeling, S. Zhang, Study of magnetic flux ropes by multi-spacecraft analysis method and GS method. Sci. China Tech. Sci. (2019 submitted) R.B. Torbert, J.L. Burch, T.D. Phan, et al., Electron-scale dynamics of the diffusion region during symmetric magnetic reconnection in space. Science 362, 1391–1395 (2018). https://doi.org/10.1126/science.aat2998 J. Vogt, S. Haaland, G. Paschmann, Accuracy of multi-point boundary crossing time analysis. Ann. Geophys. 29, 2239–2252 (2011). https://doi.org/10.5194/angeo-29-2239-2011 D.E. Wendel, M.L. Adrian, Current structure and nonideal behavior at magnetic null points in the turbulent magnetosheath. J. Geophys. Res. Space Phys. 118, 1571–1588 (2013). https://doi.org/10.1002/jgra.50234 C.J. Xiao, Z.Y. Pu, Z.W. Ma, S.Y. Fu, Z.Y. Huang, Q.G. Zong, Inferring of flux rope orientation with the minimum variance analysis technique. J. Geophys. Res. 109, A11218 (2004). https://doi.org/10.1029/2004JA010594 T. Xiao, H. Zhang, Q.Q. Shi, Q.-G. Zong, S.Y. Fu, A.M. Tian, W.J. Sun, S. Wang, G.K. Parks, S.T. Yao, H. Rème, I. Dandouras, Propagation characteristics of young hot flow anomalies near the bow shock: cluster observations. J. Geophys. Res. Space Phys. 120, 4142–4154 (2015). https://doi.org/10.1002/2015JA021013 Y.Y. Yang, C. Shen, Y.C. Zhang, Z.J. Rong, X. Li, M. Dunlop, Y.H. Ma, Z.X. Liu, C.M. Carr, H. Rème, The force-free configuration of flux ropes in geomagnetotail: cluster observations. J. Geophys. Res. Space Phys. 119(8), 6327–6341 (2014). https://doi.org/10.1002/2013ja019642 S.T. Yao, Q.Q. Shi, Z.Y. Li, X.G. Wang, A.M. Tian, W.J. Sun, M. Hamrin, M.M. Wang, T. Pitkänen, S.C. Bai, X.C. Shen, X.F. Ji, D. Pokhotelov, Z.H. Yao, T. Xiao, Z.Y. Pu, S.Y. Fu, Q.G. Zong, A. De Spiegeleer, W. Liu, H. Zhang, H. Rème, Propagation of small size magnetic holes in the magnetospheric plasma sheet. J. Geophys. Res. Space Phys. 121, 5510–5519 (2016). https://doi.org/10.1002/2016JA022741 S.T. Yao, X.G. Wang, Q.Q. Shi, T. Pitkänen, M. 
Hamrin, Z.H. Yao, Z.Y. Li, X.F. Ji, A. De Spiegeleer, Y.C. Xiao, A.M. Tian, Z.Y. Pu, Q.G. Zong, C.J. Xiao, S.Y. Fu, H. Zhang, C.T. Russell, B.L. Giles, R.L. Guo, W.J. Sun, W.Y. Li, X.Z. Zhou, S.Y. Huang, J. Vaverka, M. Nowada, S.C. Bai, M.M. Wang, J. Liu, Observations of kinetic-size magnetic holes in the magnetosheath. J. Geophys. Res. Space Phys. 122, 1990–2000 (2017). https://doi.org/10.1002/2016JA023858 S.T. Yao, Q.Q. Shi, R.L. Guo, Z.H. Yao, A.M. Tian, A.W. Degeling, W.J. Sun, J. Liu, X.G. Wang, Q.G. Zong, H. Zhang, Z.Y. Pu, L.H. Wang, S.Y. Fu, C.J. Xiao, C.T. Russell, B.L. Giles, Y.Y. Feng, T. Xiao, S.C. Bai, X.C. Shen, L.L. Zhao, H. Liu, Magnetospheric multiscale observations of electron scale magnetic peak. Geophys. Res. Lett. 45(2), 527–537 (2018). https://doi.org/10.1002/2017gl075711 Yao et al., Kinetic-scale flux rope in the magnetosheath boundary layer. J. Geophys. Res. (2019 submitted) Y.C. Zhang, C. Shen, Z.X. Liu, Z.J. Rong, T.L. Zhang, A. Marchaudon, H. Zhang, S.P. Duan, Y.H. Ma, M.W. Dunlop, Y.Y. Yang, C.M. Carr, I. Dandouras, Two different types of plasmoids in the plasma sheet: cluster multisatellite analysis application. J. Geophys. Res. Space Phys. 118(9), 5437–5444 (2013). https://doi.org/10.1002/jgra.50542 C. Zhao, C.T. Russell, R.J. Strangeway, S.M. Petrinec, W.R. Paterson, M. Zhou, B.J. Anderson, W. Baumjohann, K.R. Bromund, M. Chutter, D. Fischer, G. Le, R. Nakamura, F. Plaschke, J.A. Slavin, R.B. Torbert, H.Y. Wei, Force balance at the magnetopause determined with MMS: application to flux transfer events. Geophys. Res. Lett. 43(23), 11,941–11,947 (2016). https://doi.org/10.1002/2016gl071568 X.-Z. Zhou, Q.-G. Zong, Z.Y. Pu, T.A. Fritz, M.W. Dunlop, Q.Q. Shi, J. Wang, Y. Wei, Multiple triangulation analysis: another approach to determine the orientation of magnetic flux ropes. Ann. Geophys. 24(6), 1759–1765 (2006a). https://doi.org/10.5194/angeo-24-1759-2006 X.-Z. Zhou, Q.-G. Zong, J. Wang, Z.Y. Pu, X.G. Zhang, Q.Q. Shi, J.B.
Cao, Multiple triangulation analysis: application to determine the velocity of 2-D structures. Ann. Geophys. 24(11), 3173–3177 (2006b). https://doi.org/10.5194/angeo-24-3173-2006 X.-Z. Zhou, Z.Y. Pu, Q.-G. Zong, P. Song, S.Y. Fu, J. Wang, H. Zhang, On the error estimation of multi-spacecraft timing method. Ann. Geophys. 27, 3949–3955 (2009). https://doi.org/10.5194/angeo-27-3949-2009 This work was supported by the National Natural Science Foundation of China (grants 41774153 and 41574157), and project supported by the Specialized Research Fund for State Key Laboratories, and the International Space Science Institute (ISSI). The instrumental teams of MMS and Cluster are greatly appreciated for providing magnetic field, electric field and plasma data. All the data are available from MMS Science Data Center (https://lasp.colorado.edu/mms/sdc/public/) and Cluster Science Archive (https://csa.esac.esa.int/csa-web/). MMS data access and processing was done using Space Physics Environment Data Analysis System (SPEDAS, www.spedas.org) V3.1, see Angelopoulos et al. (2019). GUI interfaces for the MVA, MDD and STD methods can now be accessed in the SPEDAS. Shandong Provincial Key Laboratory of Optical Astronomy and Solar-Terrestrial Environment, Institute of Space Sciences, Shandong University, Weihai, China Q. Q. Shi, A. M. Tian, S. C. Bai, A. W. Degeling & S. T. Yao Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, Sagamihara, Japan H. Hasegawa School of Earth and Space Sciences, Peking University, Beijing, China Z. Y. Pu, Q.-G. Zong, X.-Z. Zhou & S. Y. Fu Space Science Institute, School of Astronautics, Beihang University, Beijing, China M. Dunlop Key Laboratory of Earth and Planetary Physics, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029, China R. L. Guo & Y. Wei State Key Laboratory of Space Weather, Chinese Academy of Sciences, Beijing, China Z. Q. Liu Q. Q. Shi A. M. Tian S. C. Bai A. W. Degeling Z. Y.
Pu R. L. Guo S. T. Yao Q.-G. Zong Y. Wei X.-Z. Zhou S. Y. Fu Correspondence to Q. Q. Shi. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Shi, Q.Q., Tian, A.M., Bai, S.C. et al. Dimensionality, Coordinate System and Reference Frame for Analysis of In-Situ Space Plasma and Field Data. Space Sci Rev 215, 35 (2019). https://doi.org/10.1007/s11214-019-0601-2 Keywords: Dimensionality, Dimension number, Variant/invariant axis, Flux rope, Current sheet.
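Of the methods named above, MVA is the simplest to illustrate: it eigen-decomposes the 3×3 covariance matrix of a field time series, and the minimum-variance eigenvector estimates the invariant axis. The sketch below is a generic textbook-style illustration, not the SPEDAS implementation; the toy field series and all names are invented for the example.

```python
import numpy as np

def minimum_variance_analysis(B):
    """Textbook MVA: eigen-decompose the covariance matrix of a
    field time series B (shape N x 3).  Returns eigenvalues in
    descending order and the corresponding unit eigenvectors as rows:
    maximum-, intermediate- and minimum-variance directions."""
    B = np.asarray(B, dtype=float)
    mean = B.mean(axis=0)
    M = (B.T @ B) / len(B) - np.outer(mean, mean)  # 3x3 covariance matrix
    vals, vecs = np.linalg.eigh(M)                 # ascending eigenvalues
    order = np.argsort(vals)[::-1]                 # reorder to descending
    return vals[order], vecs[:, order].T

# Toy series: large variance along x, small along y, none along z, so
# the minimum-variance ("invariant") axis should come out along z.
t = np.linspace(0.0, 2.0 * np.pi, 200)
B = np.column_stack([3.0 * np.sin(t), 0.5 * np.cos(t), np.full_like(t, 5.0)])
vals, vecs = minimum_variance_analysis(B)
n_min = vecs[2]  # estimated minimum-variance direction, here ~ (0, 0, ±1)
```

The eigenvalue ratios (maximum/intermediate/minimum variance) are what one would inspect in practice to judge how well-determined the invariant axis is.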
On additive MDS codes over small fields Simeon Ball 1, Guillermo Gamboa 1 and Michel Lavrauw 2, Departament de Matemàtiques, Universitat Politècnica de Catalunya, Jordi Girona 1-3, 08034 Barcelona, Spain Faculty of Engineering and Natural Sciences, Sabancı University, Istanbul, Turkey * Corresponding author: Simeon Ball Received December 2020 Revised April 2021 Early access July 2021 Fund Project: The first author acknowledges the support of the project MTM2017-82166-P of the Spanish Ministerio de Ciencia y Innovación Let $ C $ be an $ (n,q^{2k},n-k+1)_{q^2} $ additive MDS code which is linear over $ {\mathbb F}_q $. We prove that if $ n \geq q+k $ and $ k+1 $ of the projections of $ C $ are linear over $ {\mathbb F}_{q^2} $ then $ C $ is linear over $ {\mathbb F}_{q^2} $. We use this geometrical theorem, other geometric arguments and some computations to classify all additive MDS codes over $ {\mathbb F}_q $ for $ q \in \{4,8,9\} $. We also classify the longest additive MDS codes over $ {\mathbb F}_{16} $ which are linear over $ {\mathbb F}_4 $. In these cases, the classifications not only verify the MDS conjecture for additive codes, but also confirm there are no additive non-linear MDS codes which perform as well as their linear counterparts. These results imply that the quantum MDS conjecture holds for $ q \in \{ 2,3\} $. Keywords: MDS codes, MDS conjecture, quantum codes, additive codes, stabiliser codes, arcs. Mathematics Subject Classification: Primary: 94B27; Secondary: 51E22. Citation: Simeon Ball, Guillermo Gamboa, Michel Lavrauw. On additive MDS codes over small fields. Advances in Mathematics of Communications, doi: 10.3934/amc.2021024 T. L. Alderson, $(6, 3)$-MDS codes over an alphabet of size $4$, Des. Codes Cryptogr, 38 (2006), 31–40. doi: 10.1007/s10623-004-5659-4.
Google Scholar S. Ball, On sets of vectors of a finite vector space in which every subset of basis size is a basis, J. Eur. Math. Soc., 14 (2012), 733–748. doi: 10.4171/JEMS/316. Google Scholar S. Ball and M. Lavrauw, Arcs in finite projective spaces, EMS Surv. Math. Sci., 6 (2019), 133–172. doi: 10.4171/emss/33. Google Scholar A. Betten, M. Braun, H. Fripertinger, A. Kerber, A. Kohnert and A. Wassermann, Error-Correcting Linear Codes. Classification by Isometry and Applications, Algorithms and Computation in Mathematics 18, Springer, 2006. Google Scholar A. Blokhuis and A. E. Brouwer, Small additive quaternary codes, European J. Combin., 25 (2004), 161–167. doi: 10.1016/S0195-6698(03)00096-9. Google Scholar K. Bogart, D. Goldberg and J. Gordon, An elementary proof of the MacWilliams theorem on equivalence of codes, Inform and Control, 37 (1978), 19–22. doi: 10.1016/S0019-9958(78)90389-3. Google Scholar K. A. Bush, Orthogonal arrays of index unity, Ann. Math. Statistics, 23 (1952), 426–434. doi: 10.1214/aoms/1177729387. Google Scholar P. Dembowski, Finite Geometries, Reprint of the 1968 original. Classics in Mathematics. Springer-Verlag, Berlin, 1997. Google Scholar J. Bamberg, A. Betten, Ph. Cara, J. De Beule, M. Lavrauw and M. Neunhöffer, Finite Incidence Geometry, FinInG–a GAP Package, Version 1.4.1, 2018. https://www.gap-system.org/Packages/fining.html. Google Scholar G. A. Gamboa Quintero, Additive MDS codes, Master's Thesis, Universitat Politècnica Catalunya, 2020. Google Scholar The GAP Group, GAP – Groups, Algorithms, Programming -a System for Computational Discrete Algebra, Version 4.11.0, 2020. https://www.gap-system.org. Google Scholar L. H. Soicher, GAP Package GRAPE, Version 4.8.5, 2021. https://gap-packages.github.io/grape. Google Scholar M. Grassl and M. Rötteler, Quantum MDS codes over small fields, in Proc. Int. Symp. Inf. Theory (ISIT), (2015), 1104–1108, arXiv: 1502.05267. doi: 10.1109/ISIT.2015.7282626. Google Scholar J. W. P. Hirschfeld and L. 
Storme, The packing problem in statistics, coding theory and finite projective spaces: Update 2001, Finite Geometries, Dev. Math., Kluwer Acad. Publ, Dordrecht, 3 (2001), 201-246. doi: 10.1007/978-1-4613-0283-4_13. Google Scholar F. Huber and M. Grassl, Quantum codes of maximal distance and highly entangled subspaces, Quantum, 4 (2020), 284, arXiv: 1907.07733. doi: 10.22331/q-2020-06-18-284. Google Scholar A. Ketkar, A. Klappenecker, S. Kumar and P. K. Sarvepalli, Nonbinary stabilizer codes over finite fields, IEEE Trans. Inf. Theory, 52 (2006), 4892-4914. doi: 10.1109/TIT.2006.883612. Google Scholar J. I. Kokkala, D. S. Krotov and P. R. J. Östergård, On the classification of MDS codes, IEEE Trans. Inf. Theory, 61 (2015), 6485–6492. doi: 10.1109/TIT.2015.2488659. Google Scholar J. I. Kokkala and P. R. J. Östergård, Further results on the classification of MDS codes, Adv. Math. Commun., 10 (2016), 489–498. doi: 10.3934/amc.2016020. Google Scholar M. Lavrauw and G. Van de Voorde, Field reduction and linear sets in finite geometry, in: Contemporary Mathematics, (eds: G Kyureghyan, GL Mullen, and A Pott), American Mathematical Society, 632 (2015), 271–293. doi: 10.1090/conm/632/12633. Google Scholar S. Linton, Finding the smallest image of a set, in: ISSAC '04: Proceedings of the 2004 International Symposium on Symbolic and Algebraic Computation, 2004 (2004), 229–234. doi: 10.1145/1005285.1005319. Google Scholar L. Lunelli and M. Sce, Considerazione aritmetiche e risultati sperimentali sui $\{K; n\}_q$-archi, Ist. Lombardo Accad. Sci. Rend. A, 98 (1964), 3-52. Google Scholar F. J. MacWilliams, Combinatorial Problems of Elementary Abelian Groups, Thesis (Ph.D.)–Radcliffe College, 1962. Google Scholar K. Shiromoto, Note on MDS codes over the integers modulo $p^{m}$,, Hokkaido Mathematical Journal, 29 (2000), 149–157. doi: 10.14492/hokmj/1350912961. Google Scholar L. H. 
Soicher, Computation of partial spreads web-page, http://www.maths.qmul.ac.uk/~lsoicher/partialspreads/ Google Scholar H. N. Ward and J. A. Wood, Characters and the equivalence of codes, J. Combin. Theory Ser. A, 73 (1996), 348–352. doi: 10.1016/S0097-3165(96)80011-2. Google Scholar

Table 1. The classification of arcs of lines of $\mathrm{PG}(5, 2)$
number of arcs of points of $\mathrm{PG}(2, 4)$: 1 1 1
number of arcs of lines of $\mathrm{PG}(5, 2)$: 1 1 1

Table 2. The classification of arcs of planes of PG(8, 2)
size: 4 5 6 7 8 9 10
number of arcs of points of PG(2, 8): 1 1 3 2 2 2 1
number of arcs of planes of PG(8, 2): 1 2 4 2 2 2 1

Table 3. The classification of arcs of lines of PG(5, 3)
# of arcs of points of PG(2, 9): 1 2 6 3 2 1 1
# of arcs of lines of PG(5, 3): 1 4 13 4 3 1 1
# of arcs of lines of PG(3, 3): 3 4 5 4 3 2 2

size: 5 6 7 8 9 10 11
# of arcs of $\mathrm{PG}(2, 16)$: 3 22 125 865 1534 1262 300
# of line-arcs of $\mathrm{PG}(5, 4)$: 10 360 8294 15162 2869 1465 301
# of arcs of $\mathrm{PG}(2, 16)$: 159 70 30 9 5 3 2
# of line-arcs of $\mathrm{PG}(5, 4)$: 159 70 30 9 5 3 2
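For tiny parameters, the MDS property discussed in the abstract can be verified exhaustively. The sketch below is an illustration, not the authors' computation: it checks that a $[4,2]$ Reed–Solomon code over $ {\mathbb F}_4 $ attains the Singleton bound $ d = n-k+1 $. The encoding of GF(4) as the integers 0–3 is an arbitrary choice.

```python
# Brute-force check that a small Reed-Solomon code over GF(4) is MDS,
# i.e. its minimum distance attains the Singleton bound d = n - k + 1.
# GF(4) = {0, 1, w, w + 1} is encoded as 0, 1, 2, 3; addition is XOR,
# multiplication uses discrete logs base the generator w.

EXP = [1, 2, 3]            # w^0, w^1, w^2 (and w^3 = 1)
LOG = {1: 0, 2: 1, 3: 2}   # discrete logs of the nonzero elements

def gf4_add(a, b):
    return a ^ b

def gf4_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 3]

def codeword(a, b):
    """Evaluate f(x) = a + b*x at all four points of GF(4)."""
    return [gf4_add(a, gf4_mul(b, x)) for x in range(4)]

def min_distance():
    """Minimum Hamming weight over the nonzero codewords of the [4, 2]
    Reed-Solomon code (for a linear code, min distance = min weight)."""
    return min(
        sum(1 for c in codeword(a, b) if c != 0)
        for a in range(4) for b in range(4) if (a, b) != (0, 0)
    )

n, k = 4, 2
assert min_distance() == n - k + 1   # d = 3: the code is MDS
```

The check succeeds because a nonzero polynomial of degree less than 2 has at most one root, so every nonzero codeword has weight at least 3; classifying *additive* MDS codes, as in the paper, requires far more than this kind of brute force.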
Incremental gradient-free method for nonsmooth distributed optimization Jueyou Li 1, Guoquan Li 1, Zhiyou Wu 1, Changzhi Wu 2, Xiangyu Wang 2, Jae-Myung Lee 3, and Kwang-Hyo Jung 3, School of Mathematical Sciences, Chongqing Normal University, Chongqing, 400047, China Australasian Joint Research Center for Building Information Modelling, School of Built Environment, Curtin University, Bentley, WA, 6102, Australia Department of Naval Architecture and Ocean Engineering, Pusan National University, Busan, Korea Received March 2015 Revised August 2016 Published December 2016 Fund Project: This research was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) through GCRC-SOP (No. 2011-0030013), the Natural Science Foundation of China (11501070, 11401064, 11471062 and 61473326), by the Natural Science Foundation Project of Chongqing (cstc2015jcyjA00011, cstc2013jjB00001 and cstc2013jcyjA00029), by the Chongqing Municipal Education Commission under Grant KJ1500301 and KJ1500302, and by the Chongqing Normal University Research Foundation 15XLB005. In this paper we consider the minimization of the sum of local convex component functions distributed over a multi-agent network. We first extend Nesterov's random gradient-free method to the incremental setting. Then we propose incremental gradient-free methods, using either a cyclic or a randomized order in the selection of the component function. We provide the convergence and iteration complexity analysis of the proposed methods under some suitable stepsize rules. To illustrate our proposed methods, extensive numerical results on a distributed $l_1$-regression problem are presented.
Compared with existing incremental subgradient-based methods, our methods only require the evaluation of the function values rather than subgradients, which may be preferred by practical engineers. Keywords: Incremental method, Gaussian smoothing, gradient-free method, convex optimization. Mathematics Subject Classification: Primary: 47N10; Secondary: 49J52. Citation: Jueyou Li, Guoquan Li, Zhiyou Wu, Changzhi Wu, Xiangyu Wang, Jae-Myung Lee, Kwang-Hyo Jung. Incremental gradient-free method for nonsmooth distributed optimization. Journal of Industrial & Management Optimization, 2017, 13 (4) : 1841-1857. doi: 10.3934/jimo.2017021 A. M. Bagirov, M. Ghosh and D. Webb, A derivative-free method for linearly constrained nonsmooth optimization, J. Ind. Manag. Optim., 2 (2006), 319-338. doi: 10.3934/jimo.2006.2.319. Google Scholar D. P. Bertsekas, Stochastic optimization problems with nondifferentiable cost functionals, J. Optim. Theory Appl., 12 (1973), 218-231. doi: 10.1007/BF00934819. Google Scholar [3] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods, Athena Scientific, Belmont, MA, 1989. Google Scholar [4] D. P. Bertsekas, A. Nedić and E. Ozdaglar, Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003. Google Scholar D. P. Bertsekas, Incremental proximal methods for large scale convex optimization, Math. Program. B., 129 (2011), 163-195. doi: 10.1007/s10107-011-0472-0. Google Scholar [6] A. R. Conn, K. Scheinberg and L. N. Vicente, Introduction to Derivative-Free Optimization, MPS-SIAM Series on Optimization, SIAM, Philadelphia, 2009. doi: 10.1137/1.9780898718768. Google Scholar J. C. Duchi, A. Agarwal and M. J. Wainwright, Dual averaging for distributed optimization: Convergence analysis and network scaling, IEEE Trans. Autom. Control., 57 (2012), 592-606. doi: 10.1109/TAC.2011.2161027. Google Scholar J. C. Duchi, P. L. Bartlet and M. J. Wainwrighr, Randomized smoothing for stochastic optimization, SIAM J. 
Optim., 22 (2012), 674-701. doi: 10.1137/110831659. Google Scholar X. X. Huang, X. Q. Yang and K. L. Teo, A smoothing scheme for optimization problems with Max-Min constraints, J. Ind. Manag. Optim., 3 (2007), 209-222. doi: 10.3934/jimo.2007.3.209. Google Scholar [10] J. Hiriart-Urruty and C. Lemarechal, Convex Analysis and Minimization Algorithms Ⅰ, Springer, Berlin, 1996. doi: 10.1007/978-3-662-02796-7. Google Scholar X. Zhang, C. Wu, J. Li, X. Wang, Z. Yang, J. M. Lee and K. H. Jung, Binary artificial algae algorithm for multidimensional knapsack problems, Applied Soft Computing, 43 (2016), 583-595. doi: 10.1016/j.asoc.2016.02.027. Google Scholar B. Johansson, M. Rabi and M. Johansson, A randomized incremental subgradient method for distributed optimization in networked systems, SIAM J. Optim., 20 (2009), 1157-1170. doi: 10.1137/08073038X. Google Scholar K. C. Kiwiel, Convergence of approximate and incremental subgradient methods for convex optimization, SIAM J. Optim., 14 (2004), 807-840. doi: 10.1137/S1052623400376366. Google Scholar J. Y. Li, C. Z. Wu, Z. Y. Wu and Q. Long, Gradient-free method for nonsmooth distributed optimization, J. Glob. Optim., 61 (2015), 325-340. doi: 10.1007/s10898-014-0174-2. Google Scholar J. Y. Li, C. Z. Wu, Z. Y. Wu, Q. Long and X. Y. Wang, Distributed proximal-gradient method for convex optimization with inequality constraints, ANZIAM J., 56 (2014), 160-178. doi: 10.1017/S1446181114000273. Google Scholar A. Nedić and D. P. Bertsekas, Convergence rate of incremental subgradient algorithm, in Stochastic Optimization: Algorithms and Applications (eds. S. Uryasev and P. M. Pardalos), Applied Optimization, 54, Springer, 2001,223-264. doi: 10.1007/978-1-4757-6594-6_11. Google Scholar A. Nedić and D. P. Bertsekas, Incremental subgradient methods for nondifferentiable optimization, SIAM J. Optim., 12 (2001), 109-138. doi: 10.1137/S1052623499362111. Google Scholar A. Nedić and A. 
Ozdaglar, Distributed subgradient methods for multi-agent optimization, IEEE Trans. Autom. Control., 54 (2009), 48-61. doi: 10.1109/TAC.2008.2009515. Google Scholar Y. Nesterov, Random Gradient-Free Minimization of Convex Functions, Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain, January 2011. Available from: http://www.ecore.be/DPs/dp_1297333890.pdf. doi: 10.1007/s10208-015-9296-2. Google Scholar B. T. Polyak and J. Tsypkin, Robust identification, Automatica, 16 (1980), 53-63. doi: 10.1016/0005-1098(80)90086-2. Google Scholar M. G. Rabbat and R. D. Nowak, Quantized incremental algorithms for distributed optimization, IEEE J. Sel. Areas Commun., 23 (2005), 798-808. doi: 10.1109/JSAC.2005.843546. Google Scholar S. S. Ram, A. Nedić and V. V. Veeravalli, Incremental stochastic subgradient algorithms for convex optimization, SIAM J. Optim., 20 (2009), 691-717. doi: 10.1137/080726380. Google Scholar Q. J. Shi, C. He and L. G. Jiang, Normalized incremental subgradient algorithm and its application, IEEE Signal Processing, 57 (2009), 3759-3774. doi: 10.1109/TSP.2009.2024901. Google Scholar R. L. Sheu, M. J. Ting and I. L. Wang, Maximum folw problem in the distribution network, J. Ind. Manag. Optim., 2 (2006), 237-254. doi: 10.3934/jimo.2006.2.237. Google Scholar M. V. Solodov, Incremental gradient algorithms with stepsizes bounded away from zero, Comput. Optim. Appl., 11 (1998), 28-35. doi: 10.1023/A:1018366000512. Google Scholar D. M. Yuan, S. Y. Xu and J. W. Lu, Gradient-free method for distributed multi-agent optimization via push-sum algorithms, Int. J. Robust Nonlinear Control, 25 (2015), 1569-1580. doi: 10.1002/rnc.3164. Google Scholar Q. Long and C. Wu, A hybrid method combining genetic algorithm and Hooke-Jeeves method for constrained global optimization, J. Ind. Manag. Optim., 10 (2014), 1279-1296. doi: 10.3934/jimo.2014.10.1279. Google Scholar G. H. 
Yu, A derivative-free method for solving large-scale nonlinear systems of equations, J. Ind. Manag. Optim., 6 (2010), 149-160. doi: 10.3934/jimo.2010.6.149. Google Scholar C. J. Yu, K. L. Teo, L. S. Zhang and Y. Q. Bai, A new exact penalty function method for continuous inequality constrained optimization problems, J. Ind. Manag. Optim., 6 (2010), 895-910. doi: 10.3934/jimo.2010.6.895. Google Scholar F. Yousefian, A. Nedić and U. V. Shanbhag, On stochastic gradient and subgradient methods with adaptive steplength sequences, Automatica, 48 (2012), 56-67. doi: 10.1016/j.automatica.2011.09.043. Google Scholar J. Li, G. Chen, Z. Dong and Z. Wu, A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints, Comp. Opt. Appl., 64 (2016), 671-697. doi: 10.1007/s10589-016-9826-0. Google Scholar

Figure 1. Function value error versus number of cycles $K$ with a constant stepsize $\alpha=0.001$ for algorithms CIGF and CISG
Figure 2. Function value error versus number of iterations $N$ with a constant stepsize $\alpha=0.001$ for algorithms RIGF and RISG
Figure 3. (a) Function value error versus number of cycles $K$ with diminishing stepsize choices: $\alpha_{1}(k)={1}/{(m(k-1)+i)}, ~ \alpha_{2}(k)={1}/{(m(k-1)+i)^{\frac{2}{3}}}, k=0, 1, \ldots, i=1, \ldots, m$; (b) Function value error versus number of iterations $N$ with diminishing stepsize choices: $\theta_{1}(k)={1}/{k}, ~\theta_{2}(k)={0.1}/{k^{\frac{2}{3}}}, k=1, 2, \ldots$
Figure 4. For a fixed target accuracy $\epsilon=0.01$ and a constant stepsize $\alpha=0.001$, comparisons between algorithms CIGF and CISG: (a) number of iterations $N$ versus dimensions of the agent $d$ for fixed $m=100$; (b) number of iterations $N$ versus number of agents $m$ for fixed $d=2$
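The randomized incremental scheme benchmarked in these figures can be sketched compactly: at each iteration one component $f_i$ is sampled and its subgradient is replaced by a Nesterov-style Gaussian finite-difference estimate $g_\mu(x) = \frac{f(x+\mu u)-f(x)}{\mu}\,u$ with $u \sim N(0,I)$. The following is an illustrative sketch on a toy $l_1$-regression instance, not the authors' code; the stepsize, smoothing parameter $\mu$, iteration count and data are arbitrary choices.

```python
import random

def gf_oracle(f, x, mu=1e-6):
    """Nesterov-style Gaussian gradient-free oracle:
    g = ((f(x + mu*u) - f(x)) / mu) * u,  u ~ N(0, I)."""
    u = [random.gauss(0.0, 1.0) for _ in x]
    scale = (f([xi + mu * ui for xi, ui in zip(x, u)]) - f(x)) / mu
    return [scale * ui for ui in u]

def rigf(components, x0, steps=20000, alpha=1e-3):
    """Randomized incremental gradient-free method: at each iteration,
    sample one component uniformly at random and step along its
    smoothed-gradient estimate (constant stepsize for simplicity)."""
    x = list(x0)
    for _ in range(steps):
        g = gf_oracle(random.choice(components), x)
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# Toy l1-regression: minimize sum_i |<a_i, x> - b_i|; the data below
# are consistent, so the minimizer is x* = (1, -2).
random.seed(0)  # fixed seed so the sketch is reproducible
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], -2.0), ([1.0, 1.0], -1.0)]
components = [
    (lambda x, a=a, b=b: abs(sum(ai * xi for ai, xi in zip(a, x)) - b))
    for (a, b) in data
]
x = rigf(components, [0.0, 0.0])  # x should approach (1, -2)
```

The cyclic variant differs only in replacing the random choice by a sweep over the components in fixed order; note that each step needs only two function evaluations and no subgradient, which is the practical appeal of the method.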
October 2012, 32(10): 3587-3620. doi: 10.3934/dcds.2012.32.3587 Transport, flux and growth of homoclinic Floer homology Sonja Hohloch 1, Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540, United States Received February 2011 Revised April 2012 Published May 2012 We point out an interesting relation between transport in Hamiltonian dynamics and Floer homology. We generalize homoclinic Floer homology from $\mathbb{R}^2$ and closed surfaces to two-dimensional cylinders. The relative symplectic action of two homoclinic points is identified with the flux through a turnstile (as defined in MacKay & Meiss & Percival [19]) and Mather's [20] difference in action $\Delta W$. The Floer boundary operator is shown to annihilate turnstiles and we prove that the rank of certain filtered homology groups and the flux grow linearly with the number of iterations of the underlying symplectomorphism. Keywords: growth, Floer homology, flux, two dimensional symplectic dynamical systems, homoclinic points. Mathematics Subject Classification: Primary: 37J05, 37J10, 37J45, 53D4. Citation: Sonja Hohloch. Transport, flux and growth of homoclinic Floer homology. Discrete & Continuous Dynamical Systems - A, 2012, 32 (10) : 3587-3620. doi: 10.3934/dcds.2012.32.3587 S. Aubry, P. Le Daeron and G. André, Classical ground-states of one-dimensional models for incommensurate structures,, unpublished preprint, (1982). Google Scholar G. D. Birkhoff, Nouvelles recherches sur les systèmes dynamiques,, Mem. Pont. Acad. Sci. Nov. Lyncaei, 53 (1935), 85. Google Scholar Y. Chekanov, Differential algebra of Legendrian links,, Invent. Math., 150 (2002), 441. doi: 10.1007/s002220200212. Google Scholar C. C. Conley and E. Zehnder, The Birkhoff-Lewis fixed point theorem and a conjecture of V. I. Arnol'd,, Invent. Math., 73 (1983), 33.
doi: 10.1007/BF01393824. Google Scholar S. de Silva, "Products in the Symplectic Floer Homology of Lagrangian Intersections,", Thesis, (1998). Google Scholar A. Fathi, Solutions KAM faibles conjuguées et barrières de Peierls,, C. R. Acad. Sci. Paris Sér. I Math., 325 (1997), 649. doi: 10.1016/S0764-4442(97)84777-5. Google Scholar A. Fathi, Orbites hétéroclines et ensemble de Peierls,, C. R. Acad. Sci. Paris Sér. I Math., 326 (1998), 1213. doi: 10.1016/S0764-4442(98)80230-9. Google Scholar A. Floer, A relative Morse index for the symplectic action,, Comm. Pure Appl. Math., 41 (1988), 393. doi: 10.1002/cpa.3160410402. Google Scholar A. Floer, The unregularized gradient flow of the symplectic action,, Comm. Pure Appl. Math., 41 (1988), 775. doi: 10.1002/cpa.3160410603. Google Scholar A. Floer, Morse theory for Lagrangian intersections,, J. Diff. Geom., 28 (1988), 513. Google Scholar R. Gautschi, J. Robbin and D. Salamon, Heegard splittings and Morse-Smale flows,, Int. J. Math. Math. Sci., 2003 (2003), 3539. Google Scholar V. Gelfreich, A proof of the exponentially small transversality of the separatrices for the standard map,, Comm. Math. Phys., 201 (1999), 155. doi: 10.1007/s002200050553. Google Scholar V. Gelfreich and C. Simó, High-precision computations of divergent asymptotic series and homoclinic phenomena,, Discrete Contin. Dyn. Syst. Ser. B, 10 (2008), 511. doi: 10.3934/dcdsb.2008.10.511. Google Scholar V. Ginzburg, The Conley conjecture,, Ann. of Math. (2), 172 (2010), 1127. doi: 10.4007/annals.2010.172.1129. Google Scholar V. Ginzburg and B. Gürel, Action and index spectra and periodic orbits in Hamiltonian dynamics,, Geometry & Topology, 13 (2009), 2745. doi: 10.2140/gt.2009.13.2745. Google Scholar S. Hohloch, Homoclinic points and Floer homology,, preprint., (). Google Scholar S. Hohloch, Floer homology and homoclinic dynamics,, preprint., (). Google Scholar V. 
Lazutkin, Splitting of separatrices for the Chirikov standard map,, Translated from the Russian and with a preface by V. Gelfreich. Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI), 300 (2003), 25. Google Scholar R. MacKay, J. Meiss and I. Percival, Transport in Hamiltonian systems,, Physica D, 13 (1984), 55. doi: 10.1016/0167-2789(84)90270-7. Google Scholar J. Mather, A criterion for the nonexistence of invariant circles,, Inst. Hautes Études Sci. Publ. Math., 63 (1986), 153. doi: 10.1007/BF02831625. Google Scholar J. Mather, Modulus of continuity for Peierls's barrier,, in, 209 (1987), 177. Google Scholar D. McDuff and D. Salamon, "Introduction to Symplectic Topology,", Second edition, (1998). Google Scholar J. Palis, On Morse-Smale dynamical systems,, Topology, 8 (1969), 385. Google Scholar H. Poincaré, Sur le problème des trois corps et les équations de la dynamique,, Acta Mathematica, 13 (1890), 1. Google Scholar H. Poincaré, Les méthodes nouvelles de la méchanique céleste,, Gauthier-Villars et fils, (1899). Google Scholar L. Polterovich, On transport in dynamical systems,, (Russian), 43 (1988), 207. Google Scholar L. Polterovich, "The Geometry of the Group of Symplectic Diffeomorphism,", Lectures in Mathematics ETH Zürich, (2001). Google Scholar L. Polterovich, Growth of maps, distortion of groups and symplectic geometry,, Inv. Math., 150 (2002), 655. doi: 10.1007/s00222-002-0251-x. Google Scholar L. Polterovich, Floer homology, dynamics and groups,, in, 217 (2006), 417. Google Scholar J. Robbin, Heegard splittings and Floer homology,, preprint, (2000). Google Scholar V. Rom-Kedar, Homoclinic tangles-classification and applications,, Nonlinearity, 7 (1994), 441. Google Scholar V. Rom-Kedar, Secondary homoclinic bifurcation theorems,, Chaos, 5 (1995), 385. Google Scholar D. Salamon, Lectures on Floer homology,, in, 7 (1999), 143. Google Scholar M. Schwarz, On the action spectrum for closed symplectically aspherical manifolds,, Pacific J. 
of Math., 193 (2000), 419. doi: 10.2140/pjm.2000.193.419. Google Scholar S. Smale, A structurally stable differentiable homeomorphism with an infinite number of periodic points,, in, (1963), 365. Google Scholar S. Smale, Diffeomorphisms with many periodic points,, in, (1965), 63. Google Scholar Chen-Chang Peng, Kuan-Ju Chen. Existence of transversal homoclinic orbits in higher dimensional discrete dynamical systems. Discrete & Continuous Dynamical Systems - B, 2010, 14 (3) : 1181-1197. doi: 10.3934/dcdsb.2010.14.1181 Peter Albers, Urs Frauenfelder. Spectral invariants in Rabinowitz-Floer homology and global Hamiltonian perturbations. Journal of Modern Dynamics, 2010, 4 (2) : 329-357. doi: 10.3934/jmd.2010.4.329 Michael Usher. Floer homology in disk bundles and symplectically twisted geodesic flows. Journal of Modern Dynamics, 2009, 3 (1) : 61-101. doi: 10.3934/jmd.2009.3.61 Peter Albers, Urs Frauenfelder. Floer homology for negative line bundles and Reeb chords in prequantization spaces. Journal of Modern Dynamics, 2009, 3 (3) : 407-456. doi: 10.3934/jmd.2009.3.407 Jacobo Pejsachowicz, Robert Skiba. Topology and homoclinic trajectories of discrete dynamical systems. Discrete & Continuous Dynamical Systems - S, 2013, 6 (4) : 1077-1094. doi: 10.3934/dcdss.2013.6.1077 Denis de Carvalho Braga, Luis Fernando Mello, Carmen Rocşoreanu, Mihaela Sterpu. Lyapunov coefficients for non-symmetrically coupled identical dynamical systems. Application to coupled advertising models. Discrete & Continuous Dynamical Systems - B, 2009, 11 (3) : 785-803. doi: 10.3934/dcdsb.2009.11.785 C. M. Evans, G. L. Findley. Analytic solutions to a class of two-dimensional Lotka-Volterra dynamical systems. Conference Publications, 2001, 2001 (Special) : 137-142. doi: 10.3934/proc.2001.2001.137 W.-J. Beyn, Y.-K Zou. Discretizations of dynamical systems with a saddle-node homoclinic orbit. Discrete & Continuous Dynamical Systems - A, 1996, 2 (3) : 351-365. 
How could I contain plasma for use in weapons? [duplicate]

This question already has answers here: Plausible plasma weapons? (9 answers)

In this sci-fi world, plasma weapons are somewhat abundant in special forces, as their ammunition can be stored in smaller spaces, and in large quantities, but I ran into a problem regarding confinement. We need a way of confinement that:

...can be created quickly.
...is capable of surviving speeds and accelerations up to 2 km/s.
...can hold plasma under higher pressures.
...is able to confine plasma with minimal losses.
...can be shaped into a bullet.

Is such containment possible?

N.S.F.A.Q:

The intended use of this weapon is to have at least enough power to, for example, scramble or disable a random part of a Soviet WW2 tank in one shot (e.g. the cannon, if struck at a sensitive point).
The containment should last for 10 secs, max.
(Water bombs are contradictory, because they contain water, but are also weapons.)

The More You Know! ... Randy Curry and his friends were able to fire a self-contained ring of plasma in open air, that lasted for 10 milliseconds.

science-based engineering
Mephistopheles

marked as duplicate by sphennings, apaul, ckersch, Mołot, dot_Sp0T May 30 '17 at 20:01

What's the intended use of the few milligrams of plasma confined within the bullet? Maybe the same function might be achieved some other way. – AlexP May 30 '17 at 14:21
"Containment" and "weapon" are typically contradictory. ;) – Draco18s May 30 '17 at 14:59
Does it have to specifically contain plasma? Or can it simply generate plasma on discharge? – Chris M. May 30 '17 at 15:42
@ChrisM No, it must contain the plasma. – Mephistopheles May 30 '17 at 15:45
What is "NSFAQ"? – JDługosz May 30 '17 at 21:17

This answer (given technological development to improve power efficiency) meets all but two of your requirements...but (potentially) bypasses their necessity. The two requirements being 'shaped into a bullet' and 'survive for 10 seconds.' This is actually a design for a true plasma weapon that fires a bolt of magnetically contained plasma.

Yes (given technological development)

The USAF has already developed a plasma weapon that meets all but two of your requirements, but it also bypasses their necessity.

Project MARAUDER

Developed by Phillips Laboratory, MARAUDER was a United States Air Force experiment to develop long-range plasma weapons. It revolved around firing electromagnetically contained plasma, held in a toroidal (donut-shaped) configuration, out of a railgun. Little is known about it after 1993 because it swiftly entered Classified Status. We don't know if it ever went anywhere.

Their design for plasma containment is called a Compact Toroid, and their design was similar to an arrangement called a Spheromak. Basically, it looks like a donut. What's interesting about Compact Toroids, however, is that they are capable of maintaining structural stability for a fraction of a second after release from the external magnetic fields that created them. In the case of MARAUDER, the stability time was about $1\,\mu s$. This was the project's biggest issue.

MARAUDER took these Compact Toroids and fired them from a railgun. Because the projectile was light-weight and magnetically strong, the prototype was able to accelerate a Compact Toroid of plasma at $10^{10}\,g$ (or about $98{,}100{,}000{,}000\ m/s^2$). The resulting projectile moved at $3{,}000\ km/s$ and struck its target with a $\approx 5$ lb TNT explosion ($5 \times 10^{-7}\ kT = 2.092\ MJ$)---and a potent, short-range electromagnetic pulse. MARAUDER estimated that by the year 2000, they could increase the shot velocity to $10{,}000\ km/s$.
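The range arithmetic behind these figures is simple enough to tabulate: the bolt flies at muzzle velocity until confinement collapses, so range = velocity × containment lifetime. A minimal sketch (the velocities and lifetimes are the ones quoted in this answer; the millisecond case is a hoped-for improvement, not a demonstrated one):

```python
# Effective range of a self-contained plasma bolt: it travels at the muzzle
# velocity until magnetic confinement collapses, so range = velocity * lifetime.

def bolt_range_m(velocity_km_s: float, lifetime_s: float) -> float:
    """Distance travelled before the toroid loses confinement, in meters."""
    return velocity_km_s * 1e3 * lifetime_s

scenarios = [
    ("1993 prototype",      3_000, 1e-6),  # 3,000 km/s, ~1 microsecond toroid
    ("projected by 2000",  10_000, 1e-6),  # faster shot, same confinement
    ("millisecond toroid", 10_000, 1e-3),  # hoped-for future containment time
]

for name, v, t in scenarios:
    print(f"{name:>18}: {bolt_range_m(v, t):>9.1f} m")
```

The microsecond cases reproduce the 3 m figure that made the prototype useless as a weapon, and the millisecond case the 10 km range discussed below.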
Before that could happen, MARAUDER vanished into Classified Status. It is generally believed that MARAUDER was eventually scrapped. Because even with those insane speeds, their 'Plasma Bolt' didn't live long enough to achieve a useful range ($3,000\ km/s \times 1\,\mu s = 3\,m$). However, in recent years, new research has shown promise in extending the lifespan of these Toroids. Some believe that we may not be far from getting containment times measured in milliseconds ($ms$). And a plasma bolt with a lifespan of a millisecond, moving at 10,000 km/s, has a range of 10 km. That's not bad at all. Still not quite good enough for the satellite-based weapon that MARAUDER was meant to be...but pretty solid for ground-based combat.

Modern science generally acknowledges that a donut shape is the best shape for containing plasma. We're still messing with the specifics of the donut (how tall is it, how wide is it, how big is the hole, how much plasma do we shove in, how hot should the plasma be, etc), but the basic consensus is that Toroids are the ideal shape for a self-sustaining plasma bolt. They work a lot like a Smoke Ring does. Naturally, in science fiction it would be very easy to explain that scientists discovered a more stable Toroidal configuration that lasts...well...however long you need it to last for the sake of the story. And it remains grounded in science.

You should add citations for some of the claims being made. (10^10 gs, 1 μs, etc). – KareemElashmawy May 30 '17 at 20:35
@KareemElashmawy The vast bulk of what I said can be found in the wiki article I linked at the very start of the post. The rest can be found here: en.wikipedia.org/wiki/Shiva_Star – guildsbounty May 30 '17 at 20:41
Your MathJax is formatting units as variables. – JDługosz May 30 '17 at 21:14
@JDługosz Thanks for calling that out...that was edited in by someone else, I fixed it.
– guildsbounty May 30 '17 at 21:21
I still see them. Most don't need TeX formatting at all, if you know how to type × and µ. – JDługosz May 30 '17 at 21:24

Plasma is not necessarily hard to confine; for example, those of us who are old enough or who have an interest in the history of computing remember the beautiful Nixie tubes which were used for the displays of computers and calculators made until the 1980s or so; Nixie tubes can be ruggedized, and some have flown in space aboard Soviet (and maybe also American) spacecraft. And everybody likes the alluring neon lights which enliven nights and hide the stars. (Picture of a Nixie tube by Hellbus. Picture of a historical neon discharge tube by Pslawinski. Both are from Wikipedia. The glowing light is emitted by plasma.) Plasma is to be found in all gas discharge lamps while in operation; it's not magical, it is an ordinary state of matter.

AlexP

Plasma requires constant electrical energy input to be maintained, so storing your plasma bullets is the biggest constraint, barring some sort of Infinite Power Supply. Other than that, you're essentially constructing bullet-sized fluorescent tubes with some sort of attached supercapacitor that can provide ionization voltages for ~10 s. Glass, perhaps jacketed in something more durable like ceramic, could contain the plasma apparatus. Some handwaving of high-capacity, high-voltage, extremely compact and inexpensive supercapacitors will be needed.

To produce plasma-cutter-like effects on armored vehicles, which I assume is the goal, is slightly different, as the plasma in the above bullet won't last very long after the bullet impacts and the power source is (presumably) destroyed. A transferred plasma cutter works by inducing a plasma in a carrier gas, creating a very conductive path between the electrode and the "target" material, which generates a tremendous amount of heat.
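To put rough numbers on the supercapacitor hand-waving, here is a back-of-the-envelope energy budget for sustaining such an arc from a bullet-sized capacitor. Every figure below (cutter power, arc duration, supercapacitor specific energy) is an illustrative assumption, not something stated in this answer:

```python
# How much energy must the bullet's capacitor hold to sustain a cutter-like
# arc, and how heavy would a present-day supercapacitor of that capacity be?

cutter_power_w = 15_000   # assumed: ~150 V * 100 A, a typical cutter scale
arc_duration_s = 0.1      # assumed: how long the arc is sustained on impact

energy_j = cutter_power_w * arc_duration_s          # joules to deliver
energy_wh = energy_j / 3600                         # same, in watt-hours

supercap_wh_per_kg = 5    # assumed order of magnitude for today's supercaps
capacitor_mass_kg = energy_wh / supercap_wh_per_kg  # mass of the capacitor alone

print(f"energy needed: {energy_j:.0f} J ({energy_wh:.2f} Wh)")
print(f"capacitor mass at {supercap_wh_per_kg} Wh/kg: {capacitor_mass_kg * 1000:.0f} g")
```

Tens of grams of capacitor per round, before adding gas canister, electrode and casing, is far beyond ordinary bullet mass budgets, which is exactly why the hand-waving above is needed.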
To recreate this in a bullet, you need:

A small, extremely-highly-compressed canister of carrier gas
A compact, high-voltage capacitor
An electrode embedded inside the bullet
A bullet design that becomes a nozzle on impact (hollow-

The idea here is that the bullet is fired not containing plasma, but everything needed to make plasma. On impact, the electrode is pushed back into the power supply and the carrier gas is released through the nozzle created on impact. The electrode strikes a plasma arc to the target, creating a conductive path and emptying the capacitor into the target. This creates your localized heat for as long as the arc can be maintained, greatly weakening or destroying the material at the point of impact.

I'm not sure if this would be able to disable a tank with a single shot, but it is scalable assuming you have this kind of super-capacitive tech. A bigger projectile yields a higher payload, so a rifle-mounted tank buster could be devised.

Chris M.

Wouldn't this approach only create plasma for a tiny fraction of a second? More or less a tiny flash as the projectile is destroyed? – apaul May 30 '17 at 17:53
@apaul34208 Which approach? – Chris M. May 30 '17 at 17:54
The approach of creating the plasma on contact. – apaul May 30 '17 at 17:55
@apaul34208 It depends on how the bullet is constructed and how "super" your supercapacitor is. But whether you store plasma inside a bullet or create it from parts of the bullet, you're not going to induce plasma for very long with a projectile. At least with the second approach, you can store your bullets normally. – Chris M. May 30 '17 at 18:00
You'd have to get it to impact and either lightly penetrate or adhere to the surface to sustain the arc for any appreciable length of time, yes.
Not to mention the fact that all the precision engineering in a plasma torch would be thrown out the window. But the OP specifically asked for "plasma contained in bullets". That's the best I could come up with, assuming he intended the plasma to cause damage. Plasma just doesn't last very long outside of carefully controlled scenarios. – Chris M. May 30 '17 at 18:15
Knaster's condition In mathematics, a partially ordered set P is said to have Knaster's condition upwards (sometimes property (K)) if any uncountable subset A of P has an upwards-linked uncountable subset, that is, an uncountable subset in which any two elements have a common upper bound in P. An analogous definition applies to Knaster's condition downwards. The property is named after the Polish mathematician Bronisław Knaster. Knaster's condition implies the countable chain condition (ccc), and it is sometimes used in conjunction with a weaker form of Martin's axiom, where the ccc requirement is replaced with Knaster's condition. Not unlike ccc, Knaster's condition is also sometimes used as a property of a topological space, in which case it means that the topology (as in, the family of all open sets) with inclusion satisfies the condition. Furthermore, assuming MA($\omega _{1}$), ccc implies Knaster's condition, making the two equivalent. References • Fremlin, David H. (1984). Consequences of Martin's axiom. Cambridge tracts in mathematics, no. 84. Cambridge: Cambridge University Press. ISBN 0-521-25091-9.
Infinite monkey theorem in popular culture The infinite monkey theorem and its associated imagery is considered a popular and proverbial illustration of the mathematics of probability, widely known to the general public because of its transmission through popular culture rather than because of its transmission via the classroom.[1] However, this popularity, as either presented to or taken in the public's mind, often oversimplifies or confuses important aspects of the different scales of the concepts involved: infinity, probability, and time—all of these are at scales beyond average human experience and practical comprehension or comparison. Popularity The history of the imagery of "typing monkeys" dates back at least as far as Émile Borel's use of the metaphor in his essay in 1913, and this imagery has recurred many times since in a variety of media. • The Hoffmann and Hofmann paper (2001) referenced a collection compiled by Jim Reeds, titled "The Parable of the Monkeys – a.k.a. The Topos of the Monkeys and the Typewriters".[2] • The enduring, widespread and popular nature of the knowledge of the theorem was noted in a 2001 paper, "Monkeys, Typewriters and Networks – the Internet in the Light of the Theory of Accidental Excellence". In their introduction to that paper, Hoffmann and Hofmann stated: "The Internet is home to a vast assortment of quotations and experimental designs concerning monkeys and typewriters.
They all expand on the theory […] that if an infinite number of monkeys were left to bang on an infinite number of typewriters, sooner or later they would accidentally reproduce the complete works of William Shakespeare (or even just one of his sonnets)."[3] • In 2002, a Washington Post article said: "Plenty of people have had fun with the famous notion that an infinite number of monkeys with an infinite number of typewriters and an infinite amount of time could eventually write the works of Shakespeare".[4] • In 2003, an Arts Council funded experiment involving real monkeys and a computer keyboard received widespread press coverage.[5] • In 2007, the theorem was listed by Wired magazine in a list of eight classic thought experiments.[6] • Another study of the history was published in the introduction to a study published in 2007 by Terry Butler, "Monkeying Around with Text".[7] Today, popular interest in the typing monkeys is sustained by numerous appearances in literature, television and radio, music, and the Internet, as well as graphic novels and stand-up comedy routines. Several collections of cultural references to the theorem have been published. The following thematic timelines are based on these existing collections. The timelines are not comprehensive – instead, they document notable examples of references to the theorem appearing in various media.[8] The initial timeline starts with some of the early history following Borel, and the later timelines record examples of the history, from the stories by Maloney and Borges in the 1940s, up to the present day. Early history • 1913 – Émile Borel’s essay – “Mécanique Statistique et Irréversibilité”. 
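Before the timelines, it is worth seeing the probability claim itself in numbers. A minimal sketch (the 26 equally likely keys and the 6-letter target "hamlet" are arbitrary illustrative choices, and each "attempt" is treated as an independent block of keystrokes):

```python
# With k equally likely keys, one specific length-L string comes up in a given
# attempt with probability (1/k)**L; over n independent attempts, the chance of
# at least one hit is 1 - (1 - p)**n, which approaches 1 as n grows -- the
# "almost surely" of the theorem -- no matter how tiny p is.

def hit_probability(keys: int, length: int, attempts: int) -> float:
    p = (1.0 / keys) ** length          # one chance in k**L per attempt
    return 1.0 - (1.0 - p) ** attempts  # chance of at least one success

# One chance in ~309 million per attempt for "hamlet" on a 26-key typewriter:
print(hit_probability(26, 6, 10**9))   # ~0.96 after a billion attempts
print(hit_probability(26, 6, 10**10))  # ~1.0: success is effectively certain
```

The same arithmetic explains why the popular image misleads: the per-attempt probability for a full play, rather than one word, is so small that the required number of attempts dwarfs any physically meaningful timescale.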
• 1928 – Arthur Eddington’s book – The Nature of the Physical World • 1931 – James Jeans' book – The Mysterious Universe • 1939 – Jorge Luis Borges' essay – “The Total Library” Literature • 1940 – In "Inflexible Logic" by Russell Maloney, a short story that appeared in The New Yorker, the protagonist felt that his wealth put him under an obligation to support the sciences, and so he tested the theory. His monkeys immediately set to work typing, without error, classics of fiction and nonfiction. The rich man was amused to see unexpurgated versions of Samuel Pepys' diaries, of which he owned only a copy of a bowdlerised edition.[9][10] • 1969 – "Uncollected Works", a short story by Lin Carter, describes a machine that rapidly simulates the infinite monkeys with the result that it generates the sum total of human writing from first principles, and onward into the future. • 1970 – A humorous short story by R. A. Lafferty, "Been a Long, Long Time" (Fantastic, December), tells the story of an angel who is punished by having to supervise (for trillions of years) randomly typing monkeys who are attempting to produce a perfect copy of the collected works of Shakespeare.[11] • 1979 – Chapter XXIII of The Neverending Story by Michael Ende describes a city full of people that have lost their memories, overseen by a monkey. The monkey entertains these people by letting them play a game of dice with letters on them. The monkey explains that since these people have lost their capability to write stories themselves, the game will make it possible for them to produce sentences, stories, or poems by pure chance. Eventually this game would produce any story ever told, including the Neverending Story itself. 
• 1987 – In the one-act play Words, Words, Words by David Ives, three monkeys named Milton, Swift, and Kafka have been confined to a cage by a Dr Rosenbaum, who has the hypothesis: "Three monkeys hitting keys at random on typewriters for an infinite amount of time will almost surely produce Hamlet." The play's humour mainly involves literary references, including moments when the random typing produces passages from great works of literature. The play premiered in January 1987, and is still being performed almost 30 years later.[12][13] • 1996 – In Jim Cowan's short story "The Spade of Reason" (published in Century 4, 1996), the main character seeks to find meaning in the universe through text randomly generated through various means; the original program he uses to do so is something he dubs the "Motorola Monkey". • 2001 – In Fooled by Randomness, Nassim Nicholas Taleb used it as an example of the role of randomness. • 2023 – The New Yorker's January 16, 2023 "Shouts & Murmurs" department featured a hilarious one-page satirical take-off entitled "The Infinite-Monkey Theorem: Field Notes" by Reuven Perlman. Here, an embedded reporter monitors "[m]onkeys and typewriters as far as the eye can see" for most of December 2022 to eventually report: "Monkey No. 7160043--nicknamed Coco--experienced a 90-minute burst of creative energy and has successfully and independently written the entirety of Shakespeare's Hamlet! The theorem has been confirmed!" Coco is ecstatic until she starts to reread the manuscript "with a furrowed brow," and then lights her copy of Hamlet on fire and announces her retirement from writing, and then her plans to apply to grad school in the fall. The monkeys then resume their unproductivity and humorous self-distractions to avoid the task of writing.
Film • 1959 – In the film On the Beach, when wireless Morse code signals are detected by radio operators on the submarine, the commander mentions "the old story about an infinite number of monkeys and an infinite number of typewriters", and that "one of them has to end up writing King Lear". • 2021 – The theorem was mentioned in the film The Boss Baby: Family Business. Radio and television • 1978 – In his radio play, The Hitchhiker's Guide to the Galaxy, Douglas Adams invoked the theorem to illustrate the power of the 'Infinite Improbability Drive' that powered a spaceship. From Episode 2: "Ford, there's an infinite number of monkeys outside who want to talk to us about this script for Hamlet they've worked out". • 1983 – In the Doctor Who episode "Mawdryn Undead", the Doctor mentions the theorem in passing (quoting it as "a treeful of monkeys"), stating to Tegan that "you and I both know, at the end of a millennium they'd still be tapping out gibberish." Tegan's response: "And you'd be tapping it out right alongside them." • 1993 – In The Simpsons episode "Last Exit to Springfield", Montgomery Burns has his own room with 1000 monkeys at typewriters, one of which he chastises for mistyping a word in the opening sentence of A Tale of Two Cities: "'It was the best of times, it was the blurst of times?' You stupid monkey!"[14][15] • 1998 – An advertisement for Molson Canadian beer depicts an array of typing chimpanzees filling a seemingly endless cathedral-like structure while a voice-over sardonically asks "Could an infinite number of monkeys on an infinite number of typewriters eventually define what it is to be Canadian?"[16] • 1998 – In the "Battle of the Sexists" episode of That '70s Show, Eric Forman yells after his girlfriend Donna Pinciotti scored during a game of basketball: "Pinciotti actually scores! Hell freezes over! A monkey types Hamlet!"
• 1998 – In the animated series Fat Dog Mendoza, one of the series' main villains, Doctor Rectangle, keeps a basement full of monkeys typing away on typewriters. The running gag is that Doctor Rectangle mistakenly believes that he can directly and practically use the infinite monkey theorem, using real monkeys and typewriters, to create a great work of literature or come up with a plan that will make him famous and/or powerful. It is also believed that, unbeknown to Doctor Rectangle, the monkeys are in fact very intelligent and just type things at random to amuse themselves and receive a steady income of bananas. • 1999 – The infinite monkey theorem is the subject of a brief sketch in the Histeria! episode "Super Writers". • 1999 – "A Troo Storee", an episode of I Am Weasel, features a large room filled with several types of monkeys with typewriters who are working on a novel. When Weasel tries to pay them in bananas, they consider it an insult and quit their job, all except for Baboon.[17] • 2000 – In the Family Guy episode "The King is Dead", Lois questions Peter's creativity, to which he replies: "Oh, art-schmart. Put enough monkeys in a room with a typewriter they'll produce Shakespeare." The scene then cuts to several monkeys in a room, arguing over which flower is most appropriate in the famous line from Romeo and Juliet.[18] • 2001 – In the sixth episode of the first season of The Ricky Gervais Show, comedian Ricky Gervais tries to explain this theorem to Karl Pilkington, who refuses to believe it possible.
In attempting to explain the mathematics behind the theorem, Gervais eventually gives up and storms out of the room when, after a long explication by Gervais and Stephen Merchant, Karl says, "If they haven't even read Shakespeare, how do they know what they're doin?"[19] • 2002 – In the 2000 Years of Radio episode Tempest FM, set after Shakespeare's death in 1616, it is revealed that his plays were written by typewriter-using monkeys that he kept enslaved in his cellar. • 2004 – In "The Science Fair Affair" episode of The Adventures of Jimmy Neutron: Boy Genius, Sheen's science fair project is having an iguana sprawled on a typewriter under the assumption that it will "write the next great American novel". • 2005 – At the end of the Robot Chicken episode "Badunkadunk", the Stoopid Monkey production logo's background is made up of upside-down text pertaining to the Infinite Monkey Theorem.[20] • 2005 – In the Veronica Mars episode "Cheatty Cheatty Bang Bang", Veronica, commenting on the sudden realization she did know David 'Curly' Moran says: "Somewhere, those million chimps, with their million typewriters, must've written King Lear." • 2006 – In June 2006, The Colbert Report featured a humorous segment on how many monkeys it would take for various works. This was in response to comments made in the news on monkeys typing out the Bible or the Qur'an. According to Colbert, one million monkeys typing for eternity would produce Shakespeare, ten thousand (drinking) monkeys typing for ten thousand years would produce Hemingway, and ten monkeys typing for three days would produce a work of Dan Brown. • 2007 – In an episode of the daytime soap opera The Young and the Restless (broadcast January and February 2007 in Canada and the USA), when Colleen Carlton copies scrambled letters obtained from the Grugeon Reliquary onto a dry board, Professor Adrian Korbel jokingly asks if she's testing the Infinite Monkey Theorem. 
When asked what this is, he replies: "Thomas Henry Huxley said if you gave keyboards to an infinite amount of monkeys, and gave said monkeys an infinite amount of time… Well it is safe to say…you are not the magic monkey."[21] • 2009 – The BBC Radio 4 series The Infinite Monkey Cage derives its name from the Infinite Monkey Theorem.[22] • 2011 – On an episode of the topical comedy programme Mock the Week, comedian Micky Flanagan references it in a segment of "Scenes We'd Like to See." • 2016 – In an episode of Downton Abbey, Lady Mary remarks, "A monkey will type out the Bible if you leave it long enough." • 2022 – The theorem is the basis for the Adult Swim miniseries "The Hamlet Factory". It follows three monkeys working in an office with infinite monkeys with typewriters trying to write Hamlet.[23] Video games • 2000 – When talking to Inspector Canard in Escape from Monkey Island, he says that "if [he] had a monkey for every time some penny-ante crook tried to pin their criminal malfeasance on Pegnose Pete...[he would] have enough monkeys to work out a reasonable sequel to Hamlet by now." • 2004 – In the PS2 game Killzone, one of the characters named Hakha remarks: "it is also said that a monkey, given ample time, will write the works of Shakespeare." Comics and graphic novels • 1981 – Fone, a science fiction comic by Milo Manara. • 1989 – In the comic strip Dilbert, Dogbert tells Dilbert that his poem would take "three monkeys, ten minutes".[24] • 1990 – The Animal Man comic by Grant Morrison (a revival of the Animal Man DC character) contained an issue (Monkey Puzzles) including a monkey who typed not only the works of Shakespeare, but comic books as well.
The TPB this issue is collected in (Deus ex Machina – 2003) featured an "infinite" number of Grant Morrisons typing on the cover.[25][26] • 1998 – Jason in the comic strip FoxTrot makes Peter a program to generate random numbers of the alphabet, with Peter stating that "If it works for Hamlet, why won't it work for a Hamlet book report?"[27] • 2008 – The cartoonist Ruben Bolling satirized the thought experiment in his Tom the Dancing Bug cartoon, with a monkey asking "How can I credibly delay Hamlet's revenge until Act V" in the final frame.[28] • 2008 – In a comic book written by Scott McCloud about Google Chrome, monkeys on laptops are used as an analogy to random data.[29] • 2009 – In the graphic novel Umineko: When They Cry, Bernkastel was involved in a situation which had her make a miracle out of a nearly impossible situation. This was compared to the monkey theorem, trying at random to obtain a miracle that had an incredibly low chance. Software and internet culture • 1979 – Apple Computer released Bruce Tognazzini's "The Infinite No. Of Monkeys", a humorous demonstration of Apple BASIC, on their DOS 3.2 disk for the Apple II computer. • 1995 – "The famous Brett Watson" published his Internet paper, "The Mathematics of Monkeys and Shakespeare" which was, in 2000, to be included as a reference in RFC 2795 (see below) • 1996 – Robert Wilensky once jocularly remarked, "We've all heard that a million monkeys banging on a million typewriters will eventually reproduce the entire works of Shakespeare. Now, thanks to the Internet, we know this is not true." This version of the internet analogy "began appearing as a very frequent email and web-page epigraph starting in 1997".[30] • A variant appeared in USENET at about the same time: "The Experiment has begun! A million monkeys and a million keyboards. We call it USENET." 
• 2000 – The IETF Internet standards committee's April Fools' Day RFC proposed an "Infinite Monkey Protocol Suite (IMPS)", a method of directing a farm of infinitely many monkeys over the Internet.[31]
• 2005 – Goats, a webcomic illustrated by Jonathan Rosenberg, started an ongoing story line named "infinite typewriters" in August 2005, in which several characters accidentally teleport to an alternate dimension. There they find that this dimension is populated by monkeys with typewriters, presumably typing the scripts of many other dimensions.
• 2006 – The Infinite Monkey Project was launched by the predictive-text company T9. In this Europe-wide project, users unknown to each other text a word of their choosing to the website. The text message is free, and as the project continues the words are combined to form lyrics. The lyrics are then made into a song by the hip-hop artist Sparo, to be released as an album. If any of the tracks becomes a hit, the people who texted in the words for the lyrics will receive royalties from the project.[32][33][34]
• 2007 – A website named One Million Monkeys Typing was introduced, a collaborative writing site where anyone can sign up and add writing "snippets" that others can add on to, eventually creating stories with many outcomes.
• 2008 – An issue of MAD shows a depiction of the Infinite Monkey Theorem which posits that when good monkeys go bad, one of the infinite monkeys would surely plagiarize A Tale of Two Cities.
• 2008 – Monkeys are depicted typing random bits of text in Google's online comic book advertising their Google Chrome web browser.[35]
• 2009 – Infinite Monkey Comics was launched, which features a random comic generator that creates three-panel comics by placing a random tweet from Twitter over a random image from Flickr based on keywords of the user's choosing. The result is a nearly inexhaustible collection of potential comics generated by the random musings and typing of internet users.
• 2009 – Monkeys With Typewriters draws its namesake from the theorem.
• 2010 – Lyrois Beating a Million Monkeys, a somewhat sarcastic look at contemporary art, uses the monkeys as a metaphor.
• 2011 – www.shakespearean-monkeys.com, a social literature website where the users are the monkeys.
• 2013 – In the YouTube series Sword Art Online Abridged, the main character Kirito uses the phrase "monkeys and typewriters" to describe his acquaintance Klein grouping up with weaker players, implying that there is next to no chance they will all survive at their skill level.
• 2015 – GoofyxGrid@Home is a BOINC volunteer computing project that checks for the Infinite Monkey Theorem.
Stand-up comedy
• 1960 onwards – Comedian Bob Newhart had a stand-up routine in which a lab technician monitoring an "infinitely many monkeys" experiment discovered that one of the monkeys had typed something of interest. A typical punchline would be: "Hey, Harry! This one looks a little famous: 'To be or not to be – that is the gggzornonplatt.'"[4][36][37]
Music
• 1979 – The debut album by Leeds punk rock band the Mekons is called The Quality of Mercy Is Not Strnen. Originally released on Virgin Records in the United Kingdom, its cover features a photo, not of a monkey, but of a typing chimpanzee. The title refers to a Shakespeare quote from The Merchant of Venice: "The quality of mercy is not strain'd".[38]
See also
• Model for a rare event comparison
References
1. Examples of the theorem being referred to as proverbial include: Why Creativity Is Not like the Proverbial Typing Monkey, Jonathan W. Schooler, Sonya Dougal, Psychological Inquiry, Vol. 10, No. 4 (1999); and The Case of the Midwife Toad (Arthur Koestler, New York, 1972, page 30): "Neo-Darwinism does indeed carry the nineteenth-century brand of materialism to its extreme limits—to the proverbial monkey at the typewriter, hitting by pure chance on the proper keys to produce a Shakespeare sonnet."
The latter is sourced from Parable of the Monkeys, a collection of historical references to the theorem in various formats.
2. "The Parable of the Monkeys", as of 2007, is hosted at the website of the experimental music/dance/performance art group "Infinite Monkeys".
3. Monkeys, Typewriters and Networks, Archived 2008-05-13 at the Wayback Machine, Ute Hoffmann & Jeanette Hofmann, Wissenschaftszentrum Berlin für Sozialforschung gGmbH (WZB), 2001.
4. "Hello? This is Bob", Ken Ringle, Washington Post, 28 October 2002, page C01.
5. "Notes Towards the Complete Works of Shakespeare", Archived 2007-07-16 at the Wayback Machine – some press clippings.
6. Greta Lorge, "The Best Thought Experiments: Schrödinger's Cat, Borel's Monkeys", Wired Issue 15.06, May 2007.
7. Terry Butler, "Monkeying Around with Text", University of Alberta, Computing in the Humanities Working Papers, 2007.
8. The examples included invariably refer directly to a variation on the theme of a large number of typing monkeys producing a work of literature, usually, but not always, a work by Shakespeare. Infinite libraries, and random text generation (instead of monkeys), are also included. Trivial or incomplete references are excluded.
9. Inflexible Logic, Archived 2007-08-05 at the Wayback Machine, synopsis at the Mathematical Fiction database.
10. The story was reprinted in the classic four-volume The World of Mathematics by James R. Newman, published in 1956.
11. Been a long, long time, Archived 2007-08-08 at the Wayback Machine, synopsis by Fred Galvin, at the Mathematical Fiction database.
12. The Stage: One-acts at Punchline, Mel Gussow, The New York Times, 15 January 1987.
13. It's All in the Laughing, All in the Timing will have you in stitches, Archived 2012-03-28 at the Wayback Machine, review by Melissa Bearns for Eugene Weekly, 4 June 2006.
14. "Last Exit To Springfield". Simpson Crazy. Archived from the original on 19 August 2008.
15. Woo-hoo!
A look at the 10 best 'Simpsons' episodes ever, Press & Sun-Bulletin, 27 July 2007. "The genius of this joke is a child can laugh at it, but those who understand the allusion to Charles Dickens and the infinite monkey theorem can laugh on another level."
16. "Molson Monkeys", Advertising Age, June 1998.
17. "A Troo Storee", TV.com episode guide: "Weasel tries to test the 'monkeys typing Shakespeare' theorem".
18. Family Guy official website – script of the "Monkeys Writing Shakespeare" scene. Archived June 23, 2006, at the Wayback Machine.
19. XFM archives "Season 1 Vol. 6", "Do you know what he said to me? I explained it to him, I said 'You've got an infinite number of monkeys, an infinite number of typewriters, they will type the complete works of Shakespeare.' He said, 'Have they read Shakespeare?'"
20. The Robot Chicken Wiki, Archived 2011-09-30 at the Wayback Machine – screenshot of the Robot Chicken Stoopid Monkey production logo that refers to the Infinite Monkey Theorem.
21. Episode transcript, Archived 2007-11-12 at the Wayback Machine, at tvmegasite.net.
22. "BBC Radio 4 – The Infinite Monkey Cage, Series 1".
23. Twitter. https://twitter.com/philjamesson/status/1515003442200301568. Retrieved 2022-04-15.
24. "Dilbert Comic Strip on 1989-05-15 | Dilbert by Scott Adams".
25. Grant Morrison's Animal Man #8-26, Jonathan Woodward: "Issue #25, July '90: 'Monkey Puzzles' […] The text in the typewriter is Morrison's script for this issue. The monkey, of course, is the famous one who, given an infinite amount of time, will eventually write out the complete Shakespeare, completely at random."
26. Animal Man, Book 3 – Deus Ex Machina (Paperback), Amazon.com scan of the book cover.
27. "FoxTrot by Bill Amend for October 09, 1998 – GoComics". GoComics.
28. "Tom the Dancing Bug July 2008". Gocomics.com. Retrieved 2019-09-17.
29. "Google Chrome". Retrieved 2008-09-04.
30. "Parable of the Monkeys". Angelfire.com. Retrieved 2019-09-17.
31. S. Christey (1 April 2000). "RFC 2795: The Infinite Monkey Protocol Suite (IMPS)". Retrieved 2006-06-13.
32. "The articulate monkeys". Computer Music. Retrieved 2006-11-09.
33. "Infinite Monkey Project wants your texts". Pocket-lint. Retrieved 2006-11-09.
34. "The Infinite Monkey Project". Crossfire. Retrieved 2006-11-09.
35. "10th Page of Google Chrome comic book".
36. Flashback: Computer poetry from 1985, Al Fasoldt, The Syracuse Newspapers, 1985.
37. The date of 1960 is given in Monkeying Around with Text, Terry Butler, University of Alberta, Computing in the Humanities Working Papers, January 2007.
38. Mekons fansite – picture and commentary on the album and cover: "This unusual title was drawn from the axiom that, if you give a monkey a typewriter and an infinite amount of time, it would eventually produce the complete works of Shakespeare, a wry comment on the group's own musical ability. The rest of the Shakespeare quote appears on the Mekons Story". The last sentence refers to the later collection The Mekons Story, which included the song 'It Falleth Like Gentle Rain from Heaven'.
External links
• The Parable of the Monkeys, a bibliography with quotations
Technical advance
New ways of estimating excess mortality of chronic diseases from aggregated data: insights from the illness-death model
Ralph Brinks1,2, Thaddäus Tönnies1 & Annika Hoyer1
BMC Public Health volume 19, Article number: 844 (2019)
Recently, we have shown that the age-specific prevalence of a disease can be related to the transition rates in the illness-death model via a partial differential equation (PDE). The transition rates are the incidence rate, the remission rate and the mortality rates from the 'Healthy' and 'Ill' states. In case of a chronic disease, we now demonstrate that the PDE can be used to estimate the excess mortality from age-specific prevalence and incidence data. For the prevalence and incidence, aggregated data are sufficient: no individual subject data are needed, which allows application of the methods in contexts of strong data protection or where data from individual subjects are not accessible. After developing novel estimators for the excess mortality derived from the PDE, we apply them to simulated data and compare the findings with the input values of the simulation, aiming to evaluate the new approach. In a practical application to claims data from 35 million men insured by the German public health insurance funds, we estimate the population-wide excess mortality of men with diagnosed type 2 diabetes. In the simulation study, we find that the estimation of the excess mortality is feasible from prevalence and incidence data if the prevalence is given at two points in time. The accuracy of the method decreases as the temporal difference between these two points in time increases. In our setting, the relative error was 5% and below if the temporal difference was three years or less. Application of the new method to the claims data yields plausible findings for the excess mortality of type 2 diabetes in German men.
The described approach is useful to estimate the excess mortality of a chronic condition from aggregated age-specific incidence and prevalence data. The article does not report the results of any health care intervention.
Recently, we have shown that the age-specific prevalence of a health state or disease can be related to the transition rates in the illness-death model via a partial differential equation (PDE) [1, 2]. The transition rates are the incidence rate, the remission rate and mortality rates from the Healthy and Ill states (Fig. 1). In case of a chronic disease, i.e. a disease with no remission, this relation can be used to estimate the incidence from a sequence of cross-sectional studies if information about mortality is available [3]. This might be an alternative way to estimate the incidence of a chronic condition in situations where follow-up studies are challenging to conduct or not feasible at all.
Fig. 1 Illness-death model. The transition rates i (incidence), r (remission), m0 (mortality of the healthy), m1 (mortality of the diseased) between the compartments depend on calendar time t and age a. In case of chronic diseases, there is no way back from the Ill state to the Healthy state (dashed line). Then, the remission rate r equals zero.
In this article, we demonstrate that it is also possible to estimate excess mortality from age-specific prevalence and incidence of a chronic disease. This can be useful for the analysis of data where it is difficult to observe mortality directly, for instance in disease registers [4] or health insurance claims data where cases of death might be reported with a delay [5]. Another example where excess mortality of a chronic condition cannot be estimated directly is the US National Health Interview Survey (NHIS) from the National Center for Health Statistics [6]. NHIS is a yearly cross-sectional household interview survey with up to 90,000 participants each year.
Usually, participants are followed up for mortality by linkage to the National Death Index. This implies that it is possible to check the vital status of a participant from a previous cross-sectional interview, but it is not possible to decide if a deceased participant who had been disease-free at the interview contracted the disease in the period between the cross-section and the date of death. In other words, for a subject who was disease-free at the interview, it is not possible to determine the disease status at death. Thus, when estimating mortality it is unclear whether such a case should be attributed to the mortality of the healthy or of the diseased subjects. To overcome these problems, we examine mathematical relations of the illness-death model and associated PDEs to develop reliable estimators for excess mortality.
Illness-death model
We consider the illness-death model as shown in Fig. 1. Each subject of the population is in one of the relevant disease states: Healthy (with respect to the considered chronic disease), Ill, or Dead. Let the number of people aged a at calendar time t in the Healthy and Ill states be denoted by H(t, a) and I(t, a), respectively. Subjects can transit from both states into the (absorbing) state Dead. The transition rates between the three states are the incidence rate (i), the remission rate (r), the mortality rate of the healthy (m0) and the mortality rate of the diseased (m1). These rates usually depend on calendar time t and on age a. Henceforth, we consider only chronic, i.e., irreversible diseases, which is equivalent to a remission rate of zero (r = 0). To develop estimators for the excess mortality Δm = m1 − m0, we use mathematical relations between the incidence, the prevalence p(t, a) = I(t, a)/{I(t, a) + H(t, a)} and the mortality rates in the illness-death model.
An alternative epidemiological measure to Δm for assessing discrepancies between the mortality rates m0 and m1 is the mortality rate ratio R = m1/m0, which is of potential interest for practitioners. The mortality rate ratio R expresses the mortality rate of the diseased people relative to the non-diseased at the same age. Due to this plain interpretation, R is more often used than the (absolute) excess mortality Δm. Both measures, Δm and R, are related by R = 1 + Δm/m0.
Direct estimation in simulated data about dementia
To illustrate how measures of excess mortality in a chronic disease can directly be estimated from incidence and prevalence data, we conduct a simulation study. We mimic a sequence of two cross-sectional studies for a chronic disease in two different years t1 and t2 centered at the year t = 2000. Let ΔT = t2 − t1 denote the difference between t1 and t2, i.e. t1 = 2000 − ΔT/2 and t2 = 2000 + ΔT/2. In each of the cross-sectional studies at t1 and t2, the age-specific prevalence p is surveyed (Fig. 2).
Fig. 2 Prevalence data from two cross-sections at time t1 and t2 are used to estimate the excess mortality at the midpoint t = t1 + ΔT/2 = t2 − ΔT/2 (figure adapted from [Bri16])
The aim is to estimate the excess mortality at year t = 2000 from the cross-sectional prevalence data at t1 and t2 and the incidence. To assess the impact of the temporal difference between the cross-sectional studies, we vary ΔT from 0.1 to 10 (years). Together with the age-specific incidence rate i at t = 2000, the prevalence data in the two years t1 and t2 serve as input values to estimate the excess mortality in the year t = 2000. The estimated excess mortality is then compared with the rates used to set up the simulation study in terms of absolute and relative bias. The input data for the simulation are motivated by survey data about dementia in the female population of Europe [7].
Dementia is a major health problem in many countries with potentially increasing prevalence in the future [8]. The age-specific prevalence p for each of the two years t1 and t2 is calculated analytically with the incidence rate i from [7]. The age-specific mortality rate m0 of the dementia-free population is chosen to be m0(t, a) = exp(−10.7 + 0.1a + t ln(0.99)), aiming to approximate the mortality of the European population based on the Gompertz-Makeham law of mortality [9]. In addition, we assume that the mortality m1 of the diseased people can be written as a product of m0 and R: m1(t, a) = R(t, a) × m0(t, a) with log R(t, a) = log(3) + [log(1.5) − log(3)] (a − 60)/(90 − 60). The rationale for choosing this R is based on the idea that m1 also follows a Gompertz-Makeham law. Then, the logarithm of the quotient m1/m0 is a straight line as given here. The specific numerical values in the definition of R are chosen to mimic the age-dependency as reported in [10], where R was found to be about 3 and 1.5 at 60 and 90 years of age, respectively. Note, however, that in this simulation we want to demonstrate feasibility of the method in a realistic range of parameters. We do not aim for the best obtainable agreement between our input data and the observed data.
Bayes estimation and application to claims data
After describing the direct estimation, we present an estimation method in the framework of Bayesian inference. Bayes methods are increasingly used in applied statistics because they provide a flexible framework for the analysis of scientific problems and the quantification of uncertainty in their solution [11]. As an application of the Bayesian approach, we estimate the excess mortality of type 2 diabetes in the year 2012 from claims data comprising 35 million German men. Goffrier and colleagues [12] reported the age-specific prevalence of diabetes among German men in the years t1 = 2009 and t2 = 2015 as shown in Fig. 3.
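The simulation's input rates defined above can be written down directly in code. This is a minimal sketch, not the authors' implementation; in particular, treating calendar time t as an offset from a reference year is an assumption, since the text does not state the time origin of the Gompertz-Makeham formula.

```python
import math

def m0(t, a):
    """Mortality rate of the dementia-free population, per the formula
    m0(t, a) = exp(-10.7 + 0.1*a + t*ln(0.99)) above (Gompertz-Makeham form).
    Assumption: t is measured relative to a reference year."""
    return math.exp(-10.7 + 0.1 * a + t * math.log(0.99))

def R(a):
    """Mortality rate ratio: log R is linear in age, anchored at R(60) = 3
    and R(90) = 1.5 as in the definition above."""
    log_r = math.log(3) + (math.log(1.5) - math.log(3)) * (a - 60) / (90 - 60)
    return math.exp(log_r)

def m1(t, a):
    """Mortality of the diseased: m1 = R * m0."""
    return R(a) * m0(t, a)
```

By construction, R(60) evaluates to 3 and R(90) to 1.5, matching the anchor values taken from [10].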
In the same work, the age-specific incidence rate i in 2012 was surveyed. The data for this analysis are publicly available and can be found in [12].
Fig. 3 Surveyed age-specific prevalence p of type 2 diabetes in German men in 2009 (black line with circles) and 2015 (blue with crosses)
Our aim in the diabetes example is to estimate the age-specific mortality rate ratio R in the range 50 to 90 years of age. Recently, the mortality rate ratio has been estimated for a smaller age range in Tönnies et al. [13]. Compared to [13], we extend the age range by the novel Bayesian approach. The idea for the Bayes method is that for given age-specific prevalence p, incidence rate i and general mortality m, an estimate of the excess mortality in terms of the mortality rate ratio R is desired. According to the Theorem of Bayes [11] we obtain
$$ \mathrm{f}\left(R\mid p\right)\propto \mathrm{f}\left(p\mid R\right)\ \mathrm{f}(R) \quad (1) $$
where f(R|p) is the posterior distribution of R, f(p|R) is the probability density function of p given R, and f(R) denotes the prior distribution of R. For clarity, we assume that i and m are known. Motivated by empirical findings from the Danish Diabetes Register [14], we assume that the logarithm of the age-specific mortality rate ratio R is approximately a straight line in the age range 50 to 90 years:
$$ \log \left(R(a)\right)=\log \left(R(50)\right)+\left[\log \left(R(90)\right)-\log \left(R(50)\right)\right]\ \left(a-50\right)/\left(90-50\right) \quad (2) $$
For estimation of R(50) and R(90) in Eq. (2), we use weakly informative prior distributions R(50) ~ U(2; 9) and R(90) ~ U(1; 2), again inspired by the Danish Diabetes Register. U(v; w) denotes the continuous uniform distribution with minimum v and maximum w. In Bayesian terminology, our aim is to estimate the joint a-posteriori distribution for R(50) and R(90). To use Eq.
(1) for the estimation of R given p, we apply three steps: 1) values for R(50) and R(90) are drawn from the uniform prior distributions, 2) the PDE is solved with initial condition p(2009; a) as given in [12], and 3) the calculated solution p in 2015 is compared with the surveyed values. For solving the PDE, we use the Method of Characteristics [15] to first convert the PDE into an ordinary differential equation (ODE) and then solve the ODE by the Runge-Kutta method of fourth order [16]. Next, the calculated prevalence in 2015, p(2015; a), is compared with the observed prevalence in 2015 given by [12]. The age-specific prevalences p in the years 2009 and 2015 are shown as black and blue lines in Fig. 3, respectively. As conditional distribution f(p|R), we chose the multivariate normal distribution
$$ \mathrm{f}\left(p\mid R\right)\propto \exp \left(-{\left({p}_{\mathrm{mod}}-{p}_{\mathrm{obs}}\right)}^{\mathrm{T}}\ {\Sigma}^{-1}\ \left({p}_{\mathrm{mod}}-{p}_{\mathrm{obs}}\right)/2\right) $$
where pmod = pmod(R) is the solution of the PDE for a given R. The conditional distribution f(p|R) assesses the differences between the modeled prevalences pmod and the observed prevalences pobs. The covariance matrix Σ is estimated by the following diagonal matrix:
$$ \Sigma =\operatorname{diag}\left({p}_{\mathrm{j}}\ \left(1-{p}_{\mathrm{j}}\right)/{n}_{\mathrm{j}}\right) $$
with age-specific prevalences pj and the corresponding number of people nj in age group j. Choosing the covariance matrix as a diagonal matrix makes the implicit assumption that the prevalences pj are stochastically independent. A justification for this assumption is the fact that people belonging to one age group are different from the people in another age group. In a sensitivity analysis, we relaxed the assumption of weakly informative priors (R(50) ~ U(2; 9), R(90) ~ U(1; 2)) and examined the impact on the estimation of R(50) and R(90).
For this, we choose R(50) and R(90) from a bivariate normal distribution with mean (5.5, 1.5), standard deviations of 1 and 0.1 for R(50) and R(90), respectively, and a correlation coefficient of 0.9 between R(50) and R(90). These assumptions lead to the following covariance matrix for the joint distribution of R(50) and R(90):
$$ \left(\begin{array}{cc}{1}^2& 0.9\times 1\times 0.1\\ {}0.9\times 1\times 0.1& {0.1}^2\end{array}\right) $$
The age-specific prevalence p(t, a) = I(t, a)/{H(t, a) + I(t, a)}, i.e., the percentage of people aged a at time t who are chronically ill, is the solution of the following partial differential equation (PDE):
$$ \left({\partial}_t+{\partial}_a\right)p=\left(1-p\right)\left\{i-p\left({m}_1-{m}_0\right)\right\} \quad (3) $$
In Eq. (3), ∂t and ∂a denote the partial derivatives with respect to t and a, respectively. The mathematical proof for Eq. (3) can be obtained from examining the change rates of the number of healthy and ill people in the illness-death model (H and I in Fig. 1) [17] or by using the theory of stochastic processes [2]. Eq. (3) implies that the excess mortality Δm = m1 − m0 can directly be estimated from the incidence rate i, the prevalence p and the temporal change of the prevalence (∂t + ∂a)p:
$$ \Delta m=\left[i-\frac{\left({\partial}_t+{\partial}_a\right)p}{1-p}\right]/p \quad (4) $$
Note that for the direct estimation of the excess mortality Δm by Eq. (4), only the incidence rate i and the prevalence-based figures p and (∂t + ∂a)p are necessary. No additional data are needed. Instead of using Eq. (3) for a relation between the incidence, prevalence and mortality, an alternative way is possible by considering the prevalence-odds θ(t, a) = I(t, a)/H(t, a). For the prevalence-odds θ we find the following PDE, which is equivalent to Eq. (3):
$$ \left({\partial}_t+{\partial}_a\right)\theta =i-\theta \left({m}_1-{m}_0-i\right) \quad (5) $$
Equation (5) was first published by Brunet and Struchiner [18].
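Along a characteristic direction (t, a) = (t0 + s, a0 + s), the PDE (3) becomes an ordinary differential equation in s, and the direct estimator (4) inverts it. The sketch below is an illustrative round trip, not the authors' code: it integrates the ODE with classical fourth-order Runge-Kutta, assuming constant rates so that a closed-form check is available.

```python
def solve_prevalence(i, dm, a_max, steps=1000, p0=0.0):
    """Integrate dp/ds = (1 - p) * (i(s) - p * dm(s)) along a characteristic
    with classical RK4; i and dm are rate functions of age s (the calendar-time
    dependence is suppressed here for brevity)."""
    h = a_max / steps
    f = lambda s, p: (1.0 - p) * (i(s) - p * dm(s))
    p, s = p0, 0.0
    for _ in range(steps):
        k1 = f(s, p)
        k2 = f(s + h / 2, p + h / 2 * k1)
        k3 = f(s + h / 2, p + h / 2 * k2)
        k4 = f(s + h, p + h * k3)
        p += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return p

def excess_mortality(i_a, p_a, dp_a):
    """Direct estimator (4): dm = [i - dp/(1 - p)] / p, where dp is the
    directional derivative (d_t + d_a)p at the point of interest."""
    return (i_a - dp_a / (1.0 - p_a)) / p_a
```

With i ≡ 0.01 and Δm ≡ 0.02, solving the ODE for the prevalence at ages around 70 and feeding a symmetric finite difference of the prevalence into the estimator recovers Δm ≈ 0.02; with Δm ≡ 0, the ODE collapses to dp/ds = (1 − p)i, whose solution p(a) = 1 − exp(−i a) is reproduced to high accuracy.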
The derivation is given in an additional file [Additional file 1]. Compared to Eq. (3), the PDE (5) has the advantage of being linear. Solving PDEs like Eqs. (3) and (5) is usually accomplished by transformation into an equivalent ordinary differential equation by the Method of Characteristics [15]. In case of Eq. (3), the resulting ordinary differential equation is of Riccati type [19], which in general can only be solved numerically because an explicit representation of the general solution does not exist [20]. In case of the equivalent Eq. (5), however, an explicit representation of the solution is indeed possible. As detailed in the additional file [see Additional file 1] it holds:
$$ \theta \left(t,a\right)={\int}_0^a i\left(t-s,a-s\right)\exp \left(-{\varphi}_{t,a}(s)\right)\, ds \quad (6) $$
For brevity, in Eq. (6) it was set \( {\varphi}_{t,a}(x)={\int}_0^x\left[{m}_1-{m}_0-i\right]\left(t-x+\tau, a-x+\tau \right) d\tau \). The explicit representation of the solution θ in Eq. (6) allows θ to be calculated with any prescribed accuracy, e.g. by Romberg integration [16], which we will use in the examples below. Applying the back-transformation p = θ/(1 + θ) yields the prevalence p. For later use, we note that Eq. (3) can also be expressed in terms of the mortality rate ratio R and the general mortality m = p m1 + (1 − p) m0:
$$ \left({\partial}_t+{\partial}_a\right)p=\left(1-p\right)\left\{i-m\frac{p\left(R-1\right)}{1+p\left(R-1\right)}\right\} \quad (7) $$
Direct estimation: dementia in the female population of Europe
After calculating the prevalence-odds θ in the years t1 and t2 by Eq. (6), the associated prevalences p = θ/(1 + θ) are calculated. Figure 4 shows the age-specific prevalences for the years t1 = 1990 (dashed line) and t2 = 2010 (solid line). To demonstrate that our simulated prevalence has a reasonable range, we additionally plotted the surveyed values for European women reported in [8].
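The explicit representation (6) can be evaluated by numerical quadrature. The sketch below substitutes composite Simpson integration for the Romberg integration mentioned in the text (a simplification), and assumes constant rates so that the closed form θ(a) = i(1 − exp(−(Δm − i)a))/(Δm − i) is available as a check; the back-transformation p = θ/(1 + θ) then yields the prevalence.

```python
import math

def theta_constant_rates(i, dm, a, n=200):
    """Evaluate theta(a) = integral_0^a i * exp(-(dm - i) * s) ds, i.e. the
    explicit solution (6) specialised to constant incidence i and constant
    excess mortality dm, via composite Simpson quadrature (n must be even)."""
    g = lambda s: i * math.exp(-(dm - i) * s)
    h = a / n
    total = g(0.0) + g(a)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * g(k * h)
    return total * h / 3.0

def prevalence_from_odds(theta):
    """Back-transformation from prevalence-odds to prevalence: p = theta / (1 + theta)."""
    return theta / (1.0 + theta)
```

For example, with i = 0.01, Δm = 0.02 and a = 70, the quadrature agrees with the closed form to well below the accuracy needed for the estimation.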
The proposed method to estimate the excess mortality Δm in the year t = 2000 is the direct application of Eq. (4). The partial derivative (∂t + ∂a)p in Eq. (4) is approximated by a finite difference:
$$ \left({\partial}_t+{\partial}_a\right)p\left(t,a\right)\approx \left[p\left(t+\Delta T/2,a+\Delta T/2\right)-p\left(t-\Delta T/2,a-\Delta T/2\right)\right]/\Delta T $$
Fig. 4 Simulated age-specific prevalence p of dementia in European women in 1990 (solid black line) and 2010 (dashed black line). For comparison, the surveyed values in 2000 are plotted as blue dots.
Then, the excess mortality Δm can be estimated by plugging these numbers into Eq. (4). In case the mortality rate m0 of the non-diseased is known, the age-specific mortality rate ratio can be calculated by R = 1 + Δm/m0. Table 1 shows the true and estimated values for R at different ages and various choices of ΔT.
Table 1 True and estimated mortality rate ratios
From Table 1 we can see that the absolute relative error increases as the temporal difference ΔT between the cross-sections increases, and that the absolute relative error increases as the age decreases. In the extreme case (age 60, ΔT = 10), the absolute relative error reaches nearly 30%. This indicates that in case two cross-sectional studies are separated by more than three years (i.e., ΔT > 3), the method yields feasible results only in the higher age groups.
Bayesian estimation of excess mortality in male diabetics from Germany
The log-likelihood of the a-posteriori distribution f(R|p) ∝ f(p|R) × f(R) is shown in Fig. 5. The black cross indicates the maximum a posteriori (MAP) estimator for these data, which is given by RMAP(50) = 4.47 and RMAP(90) = 1.39. We obtain the estimates for R(50) and R(90) including 95% credibility intervals as shown in Table 2.
Fig. 5 Contour plot of the a-posteriori likelihood of the mortality rate ratio R at ages 50 (abscissa) and 90 (ordinate).
The maximum a posteriori (MAP) estimator is indicated as a black cross.
Table 2 Estimated mortality rate ratios for the diabetes data
These values agree well with the empirical findings from the Danish Diabetes Register [14], where values slightly below 4 and slightly above 1.5 have been found for ages 50 and 90 years, respectively. In the sensitivity analysis with bivariate normal prior distributions, the MAP estimator changed only slightly, to RMAP(50) = 4.54 and RMAP(90) = 1.38.
In this work, we have described how the illness-death model can be used to obtain information about excess mortality in case prevalence and incidence are given. It turns out that the excess mortality can be calculated from the incidence rate, the prevalence and the temporal change of the prevalence (see Eq. (4)). In data where these figures are estimable, insights can be gained into the excess mortality of people with chronic diseases compared with people without the disease. As applications, simulated data about dementia and claims data about diabetes have been analyzed. For the dementia example we estimated the excess mortality directly, and for the diabetes data we formulated a Bayesian approach. Both methods were based on aggregated data only (age-specific prevalence and incidence rate) and do not require data from individual subjects. Aggregated data can be found frequently in the literature, which makes the proposed method suitable for many applications, especially when the research question is aimed at population-wide measures. Here, we have chosen aggregated data about diabetes from the statutory health insurance in Germany based on about 35 million men. Based on the age-specific prevalence in 2009, we used non-informative priors for the mortality rate ratio R and the PDE (7) to estimate the a-posteriori likelihood of R given the age-specific prevalence in 2015. In this way, the PDE can be seen as the data-generating process underlying the prevalence data.
In a sensitivity analysis, we used more informative prior distributions (bivariate normal) and found that the estimated values for the mortality rate ratios changed only slightly. The main reason for this robustness is the large number of people underlying the prevalence data. Our approach has two limitations. The first limitation stems from the fact that Eqs. (3) and (5) are only valid if migration into or out of the considered population does not take place, or if the prevalence of the chronic condition in migrants is similar to the prevalence in the resident population [21]. If migration happens on a considerable magnitude and the prevalence in the migrants is substantially different from the residents, adaptations to Eq. (3) are possible [21]. The second limitation of our novel approach becomes visible in the simulation study about dementia: the two (or more) cross-sectional surveys for estimating the change of the prevalence should not be separated too far in time. In our simulation, the surveys should be conducted within a period of three years (or less) (i.e., ΔT ≤ 3) to keep the relative error below 5%. If the two cross-sections are separated by ten years (ΔT = 10), the relative error reaches up to 30%. In the diabetes example, the two cross-sections were separated by six years (ΔT = 6). Based on this, we expect the relative errors of our estimates R(50) and R(90) to be about 10%. For comparison, the width of the credibility intervals for our estimates R(50) and R(90) has a similar magnitude. Thus, we would conclude that a relative error of 10% in the mortality rate ratio is a rough estimate of the accuracy that can be obtained from our method applied to these data. In the current analysis, no attempt has been made to examine the effect of smaller population sizes, i.e., how sampling uncertainty in the age-specific prevalence and incidence affects the estimates of the excess mortality.
Furthermore, we have not analyzed the robustness of the estimation methods against misclassification error (i.e., false positive and false negative rates in the input prevalence and incidence data). Questions about sample sizes and misclassification are currently being analyzed and will be the subject of a future paper providing more technical details. The described approach is useful to estimate the excess mortality of a chronic condition from aggregated incidence and prevalence data. The feasibility has been demonstrated in a simulation study about dementia and in claims data about diabetes in German men. The source code for generating the data about dementia in Europe is available as an electronic supplement to this published article [Additional file 2]. The data for the diabetes example were taken from a published source [12], which has been cited in the text. Abbreviations — MAP: maximum a posteriori; NHIS: National Health Interview Survey; PDE: partial differential equation. Brinks R, Landwehr S. Change rates and prevalence of a dichotomous variable: simulations and applications. PLoS One. 2015;10(3). https://doi.org/10.1371/journal.pone.0118955. Brinks R, Hoyer A. Illness-death model: statistical perspective and differential equations. Lifetime Data Anal. 2018;24(4):743–54. https://doi.org/10.1007/s10985-018-9419-6. Brinks R, Hoyer A, Landwehr S. Surveillance of the incidence of non-communicable diseases (NCDs) with sparse resources: a simulation study using data from a National Diabetes Registry, Denmark, 1995–2004. PLoS One. 2016;11(3):e0152046. https://doi.org/10.1371/journal.pone.0152046. Egeberg A, Kristensen LE. Impact of age and sex on the incidence and prevalence of psoriatic arthritis. Ann Rheum Dis. 2018;77:e19. https://doi.org/10.1136/annrheumdis-2017-211980. Tamayo T, Brinks R, Hoyer A, Kuß O, Rathmann W. The prevalence and incidence of diabetes in Germany. Dtsch Arztebl Int. 2016;113(11):177–82. https://doi.org/10.3238/arztebl.2016.0177.
National Center for Health Statistics of the Centers for Disease Control and Prevention (CDC) About the National Health Interview Survey https://www.cdc.gov/nchs/nhis/about_nhis.htm. Accessed on 5 Apr 2019. Fratiglioni L, Launer LJ, Andersen K, Breteler MM, Copeland JR, Dartigues JF, Lobo A, Martinez-Lage J, Soininen H, Hofman A. Incidence of dementia and major subtypes in Europe: a collaborative study of population-based cohorts. Neurologic diseases in the elderly research group. Neurology. 2000;54(11 Suppl 5):S10–5. Lobo A, Launer LJ, Fratiglioni L, Andersen K, Di Carlo A, Breteler MM, Copeland JR, Dartigues JF, Jagger C, Martinez-Lage J, Soininen H, Hofman A. Prevalence of dementia and major subtypes in Europe: a collaborative study of population-based cohorts. Neurologic Diseases in the Elderly Research Group. Neurology. 2000;54(11 Suppl 5):S4–9. Missov TI, Lenart A. Gompertz–Makeham life expectancies: expressions and applications. Theor Pop Bio. 2013;90:29–35. Rait G, Walters K, Bottomley C, Petersen I, Iliffe S, Nazareth I. Survival of people with clinical diagnosis of dementia in primary care: cohort study. BMJ. 2010;341:c3584. https://doi.org/10.1136/bmj.c3584. Gelman A, Stern HS, Carlin JB, Dunson DB, Vehtari A, Rubin DB. Bayesian data analysis. London: Chapman and Hall/CRC; 2013. Goffrier B, Schulz M, Bätzing-Feigenbaum J. Administrative Prävalenzen und Inzidenzen des diabetes mellitus von 2009 bis 2015. Versorgungsatlas. 2017. https://doi.org/10.20364/VA-17.03. Tönnies T, Hoyer A, Brinks R. Excess mortality for people diagnosed with type 2 diabetes in 2012 - estimates based on claims data from 70 million Germans. Nutr Metab Cardiovasc Dis. 2018;28(9):887–91. https://doi.org/10.1016/j.numecd.2018.05.008. Carstensen B, Kristensen JK, Ottosen P, Borch-Johnsen K. Steering Group of the National Diabetes Register. The Danish National Diabetes Register: trends in incidence, prevalence and mortality. Diabetologia. 2008;51(12):2187–96. 
https://doi.org/10.1007/s00125-008-1156-z. Polyanin AD, Zaitsev VF, Moussiaux A. Handbook of first-order partial differential equations: CRC Press; 2001. Dahlquist G, Björck A. Numerical methods. Englewood Cliffs: Prentice-Hall; 1974. Brinks R, Landwehr S. A new relation between prevalence and incidence of a chronic disease. Mathematical Medicine and Biology. 2015;32(4):425–35. https://doi.org/10.1093/imammb/dqu024. Brunet RC, Struchiner CJ. A non-parametric method for the reconstruction of age-and time-dependent incidence from the prevalence data of irreversible diseases with differential mortality. Theor Pop Bio. 1999;56(1):76–90. Brinks R. Illness-death model in chronic disease epidemiology: characteristics of a related differential equation and an inverse problem. Comp Math Meth Med. 2018. https://doi.org/10.1155/2018/5091096. Kamke E. Differentialgleichungen Lösungsmethoden und Lösungen. Leipzig: Teubner Verlag; 1983. Brinks R, Landwehr S. Age- and time-dependent model of the prevalence of non-communicable diseases and application to dementia in Germany. Theor Popul Biol. 2014;92:62–8. https://doi.org/10.1016/j.tpb.2013.11.006. The authors wish to thank the Zentralinstitut für Kassenärztliche Versorgung, Berlin, for making the claims data available. This research did not receive any funding. Institute for Biometry and Epidemiology, German Diabetes Center, Auf'm Hennekamp 65, 40225, Duesseldorf, Germany Ralph Brinks, Thaddäus Tönnies & Annika Hoyer Department and Hiller Research Unit for Rheumatology, University Hospital Duesseldorf, Moorenstr. 5, 40225, Duesseldorf, Germany Ralph Brinks Thaddäus Tönnies Annika Hoyer RB had the initial idea for this work, developed the source code and drafted the manuscript. TT and AH critically discussed the ideas and revised the manuscript. All authors gave substantial intellectual contributions, read and approved the final manuscript. Correspondence to Ralph Brinks. 
This study does not involve data from human participants (the dementia example is a simulation) and otherwise relies solely on publicly available secondary data (aggregated claims data [12]). Therefore, consent to participate is not required. The Ethics Board of the University Hospital Duesseldorf has confirmed that in the case of published data, no review of the Ethics Board is necessary. Not necessary, because this manuscript does not contain data from any individual person. Microsoft Word file (doc) providing details of the mathematical background of Eqs. (5) and (6). (DOC 34 kb) Script (plain text file, accessible via any text editor, e.g., Notepad, GNU Emacs, etc.) for the dementia simulation study, intended for use with the statistical software R (The R Foundation for Statistical Computing). (R 4 kb) Brinks, R., Tönnies, T. & Hoyer, A. New ways of estimating excess mortality of chronic diseases from aggregated data: insights from the illness-death model. BMC Public Health 19, 844 (2019). https://doi.org/10.1186/s12889-019-7201-7 Chronic disease epidemiology Multi-state model Bayes estimation
What is the safe distance to a supernova explosion? In other words, which stars near the Sun could affect the Solar system's equilibrium, or life on Earth, if they went supernova? Is SN 1987A too far away? astrophysics atmospheric-science plasma-physics solar-system supernova – When I asked this one, I was checking this assumption: "the more stupid the question is, the better it is upvoted". – user46925 Oct 12 '16 at 22:53 I worked this out a little while back in order to check something said on one of these Nova or other science show specials. I wanted to know how much energy would be required to remove the entire atmosphere of the Earth and whether a supernova (or other astronomical event) could possibly do this. Let's assume the following quantities: $M_{E}$ = mass of Earth $\sim 5.9742 \times 10^{24} \ kg$ $R_{E}$ = mean equatorial radius of Earth $\sim 6.378140 \times 10^{6} \ m$ $h_{E}$ = mean scale height of Earth's atmosphere $\sim 10 \ km$ $AU$ = astronomical unit $\sim 1.49598 \times 10^{11} \ m$ (with $1 \ parsec \sim 2.0626 \times 10^{5} \ AU$) Let's assume Earth's atmosphere has the following concentrations by volume: $N_{2} \sim 78.08$% $O_{2} \sim 20.95$% $Ar \sim 0.93$% $C O_{2} \sim 0.039$% To start, we find the total volume of Earth's atmosphere, given by: $$ V_{atm} = 4 \pi \int_{a}^{b} dr \, r^{2} = \frac{4 \pi}{3} r^{3} \Big\vert_{a}^{b}, $$ where we assume $a = R_{E}$ and $b = R_{E} + h_{E}$, which gives us a volume of $V_{atm} \sim 5.120 \times 10^{18} \ m^{3}$.
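As a quick check, this shell volume and the particle counts derived from it below can be reproduced with a short script (a sketch, not part of the original answer; the 22.414 L/mol molar volume of an ideal gas at STP is an assumed standard value):

```python
import math

R_E = 6.378140e6      # mean equatorial radius of Earth, m
h_E = 1.0e4           # mean scale height of the atmosphere, m
N_A = 6.02214e23      # Avogadro constant, 1/mol
V_m = 22.414e-3       # molar volume of an ideal gas at STP, m^3/mol (assumed)

# shell volume of the atmosphere: (4*pi/3) * [(R_E + h_E)^3 - R_E^3]
V_atm = 4.0 * math.pi / 3.0 * ((R_E + h_E) ** 3 - R_E ** 3)

# volume fractions by constituent gas, from the numbers quoted above
fractions = {"N2": 0.7808, "O2": 0.2095, "Ar": 0.0093, "CO2": 0.00039}

# moles of each gas via the ideal-gas molar volume, then molecule counts
moles = {gas: f * V_atm / V_m for gas, f in fractions.items()}
molecules = {gas: n * N_A for gas, n in moles.items()}
```

Running this reproduces $V_{atm} \sim 5.12 \times 10^{18} \ m^{3}$ and, for example, $\sim 1.78 \times 10^{20}$ moles of $N_2$, in agreement with the figures quoted in the answer.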
Thus, we can estimate fractional volumes of each constituent gas to be: $N_{2} \sim 3.998 \times 10^{18} \ m^{3}$ $O_{2} \sim 1.073 \times 10^{18} \ m^{3}$ $Ar \sim 4.762 \times 10^{16} \ m^{3}$ $C O_{2} \sim 1.997 \times 10^{15} \ m^{3}$ This allows us to estimate the total number of particles for each constituent gas using: $$ N_{j} = \frac{ V_{j} }{ V_{m} } \times N_{A}, $$ where $N_{A}$ is the Avogadro constant, $V_{j}$ is the fractional volume of species $j$, and $V_{m} \sim 22.4 \ L \ mol^{-1}$ is the molar volume of an ideal gas at STP. This gives us the following values for $N_{j}$: $N_{2} \sim 1.074 \times 10^{44} \ molecules$ $O_{2} \sim 2.882 \times 10^{43} \ molecules$ $Ar \sim 1.279 \times 10^{42} \ molecules$ $C O_{2} \sim 5.365 \times 10^{40} \ molecules$ Now we estimate the total number of moles of each constituent gas using: $$ M_{j} = \frac{ N_{j} }{ N_{A} }, $$ which gives us: $N_{2} \sim 1.784 \times 10^{20} \ moles$ $O_{2} \sim 4.786 \times 10^{19} \ moles$ $Ar \sim 2.124 \times 10^{18} \ moles$ $C O_{2} \sim 8.910 \times 10^{16} \ moles$ Ionizing Earth's Atmosphere As a first approximation, we can assume that if the atmosphere were ionized, it may be easier to lose it (e.g., see the answer that discusses this). Thus, let's see how much energy is needed to ionize the atmosphere.
We can look up the ionization energy for argon and the dissociation energy for each of the other molecules, given to be: $E_{i,Ar} \sim 1520.6 \ kJ \ mole^{-1}$ $E_{d,N2} \sim 945 \ kJ \ mole^{-1}$ $E_{d,O2} \sim 497 \ kJ \ mole^{-1}$ $E_{d,CO} \sim 360 \ kJ \ mole^{-1}$ $\rightarrow E_{d,CO2} \sim 720 \ kJ \ mole^{-1}$ Using these values and the number of moles of each species, we can estimate the total energy needed to ionize all the argon and dissociate all the other constituent gases, which gives us: $N_{2} \sim 1.686 \times 10^{26} \ J$ $O_{2} \sim 2.378 \times 10^{25} \ J$ $C O_{2} \sim 6.414 \times 10^{22} \ J$ $Ar \sim 3.230 \times 10^{24} \ J$ A typical supernova (i.e., Type Ia) releases something like $\sim 10^{44} \ J$ of total energy (Note that hypernovae can release more and other stellar events can produce more energy, but more on that later.). If we assume all of that energy is directly injected to ionize the atmosphere and that it radiates from the source in a spherically symmetric manner, then the intensity will decrease as $\sim r^{-2}$, where $r$ is the distance from the source emitter (i.e., supernova) to the absorber (i.e., Earth's atmosphere). Ignoring angle of incidence issues, the absorbing area of the Earth is just $4 \ \pi R_{E}^{2} \sim 5.099 \times 10^{8} \ km^{2}$ or $\sim 5.099 \times 10^{14} \ m^{2}$. We can estimate the minimum safe distance by comparing the energies and ignore any losses by the absorber, which gives us a zeroth approximation: $$ A_{source} \ E_{abs} = A_{abs} \ E_{source} \\ r_{source}^2 = r_{abs}^2 \frac{ E_{source} }{ E_{abs} } $$ where $source$ is the energy source (i.e., supernova here) and $abs$ is the absorber (i.e., Earth's atmosphere). 
If we solve for $r_{source}$ as our minimum safe distance for each constituent gas individually, we have: $r_{source}$ for $N_{2} \sim 4.906 \times 10^{15} \ m$ or $\sim 33,000 \ AU$ or $\sim 0.16 \ parsecs$ $r_{source}$ for $O_{2} \sim 1.307 \times 10^{16} \ m$ or $\sim 87,000 \ AU$ or $\sim 0.42 \ parsecs$ $r_{source}$ for $C O_{2} \sim 2.515 \times 10^{17} \ m$ or $\sim 1,680,000 \ AU$ or $\sim 8.15 \ parsecs$ $r_{source}$ for $Ar \sim 3.544 \times 10^{16} \ m$ or $\sim 237,000 \ AU$ or $\sim 1.15 \ parsecs$ So in the grand scheme of things, these distances are small enough to suggest that most stars are far enough away that they will not completely ionize our atmosphere. Energizing Earth's Atmosphere What if we tried to determine how much energy would be necessary to increase the particles' mean kinetic energy such that their most probable speeds exceeded the escape speed of Earth's gravity? At STP the constituent gases considered have thermal speeds (i.e., most probable speeds) of: $N_{2} \sim 417.15 \ m/s$ $O_{2} \sim 390.31 \ m/s$ $C O_{2} \sim 332.82 \ m/s$ $Ar \sim 349.33 \ m/s$ The difference in kinetic energy between their STP energy and escape-speed energy is given by: $$ \Delta K_{j} = \frac{ 1 }{ 2 } m_{j} \left( V_{esc}^{2} - V_{Tj}^{2} \right), $$ which is, for each constituent gas, given by: $N_{2} \sim 2.904 \times 10^{-18} \ J$ $O_{2} \sim 3.318 \times 10^{-18} \ J$ $C O_{2} \sim 4.565 \times 10^{-18} \ J$ $Ar \sim 4.143 \times 10^{-18} \ J$ If we multiply these values by the total number of particles we estimated previously, $N_{j}$, then we can estimate the total energy needed to effectively evaporate the atmosphere of each constituent gas. The energies needed are: $N_{2} \sim 3.119 \times 10^{26} \ J$ $O_{2} \sim 9.562 \times 10^{25} \ J$ $C O_{2} \sim 2.449 \times 10^{23} \ J$ $Ar \sim 5.299 \times 10^{24} \ J$ which corresponds to a total energy of $\sim 4.131 \times 10^{26} \ J$.
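These per-gas kinetic-energy figures can be rebuilt directly from the thermal speeds and particle counts quoted above (a sketch; the molecular masses and the 11186 m/s surface escape speed are standard values assumed here, and small rounding differences at the third digit remain):

```python
u = 1.66054e-27        # atomic mass unit, kg
v_esc = 11186.0        # escape speed from Earth's surface, m/s (assumed)

# molecular masses (standard values) and figures quoted in the answer
masses = {"N2": 28.013 * u, "O2": 31.998 * u, "CO2": 44.010 * u, "Ar": 39.948 * u}
v_thermal = {"N2": 417.15, "O2": 390.31, "CO2": 332.82, "Ar": 349.33}   # m/s
N = {"N2": 1.074e44, "O2": 2.882e43, "CO2": 5.365e40, "Ar": 1.279e42}   # molecules

# Delta K_j = (1/2) m_j (v_esc^2 - v_Tj^2), then total energy = sum_j N_j * Delta K_j
dK = {g: 0.5 * masses[g] * (v_esc ** 2 - v_thermal[g] ** 2) for g in masses}
E_total = sum(N[g] * dK[g] for g in masses)
```

This reproduces $\Delta K_{N_2} \sim 2.9 \times 10^{-18} \ J$ per molecule and a total of $\sim 4.13 \times 10^{26} \ J$.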
Using a similar approach as for the ionization above, we get minimum safe distances of: $r_{source}$ for $C O_{2} \sim 1.287 \times 10^{17} \ m$ or $\sim 860,000 \ AU$ or $\sim 4.17 \ parsecs$ So again, these distances are small enough to suggest that most stars are far enough away that they will not completely evaporate our atmosphere. The above estimates are for absolute devastation and are only valid given the assumptions. Note that an extinction-level event probably would not require the total ionization or evaporation of Earth's atmosphere. Rather, only a fraction of the atmosphere would need to be ionized or evaporated to cause problems, as the two links provided by @BowlOfRed suggest. In my original post I alluded to more energetic events like hypernovae but forgot to discuss them. Typically, hypernovae are not much more than ~50 times as energetic as supernovae, which would not alter the above distances much. Gamma-ray bursts again have comparable total energy releases, but here the energy is focused into a relatively narrow beam rather than emitted spherically. Even so, the beam would need to be aimed directly at Earth, and the source would need to be relatively close, to evaporate and/or ionize the Earth's atmosphere. I should also point out that a significant fraction of the energy in a supernova (in some cases, nearly all of it) goes to neutrinos, which do effectively nothing to our atmosphere. Thus, the above minimum distances are grossly overestimated; a supernova (or other huge energy release) would need to be significantly closer to cause the same effects. What I did not mention is that the entire atmosphere need not be ionized or evaporated for there to be significant problems. Simply ionizing a significant fraction of the $N_{2}$ could produce damaging levels of $NO_{x}$'s that lead to acid rain and other polluting effects. Further, a significant enhancement in the level of ionizing radiation could damage enough of the ozone layer to lead to large-scale crop failures.
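Stepping back to the distance estimates: both the ionization and evaporation safe distances follow from the same $r_{source}$ relation and can be reproduced in a few lines (a sketch; it assumes the answer's $10^{44} \ J$ release and its $4 \pi R_E^2$ absorbing area):

```python
R_E = 6.378140e6        # mean equatorial radius of Earth, m
E_SN = 1.0e44           # assumed total supernova energy release, J
PARSEC = 3.0857e16      # m

def safe_distance(E_absorb):
    """Distance at which a spherically spread E_SN delivers exactly E_absorb
    onto an area of 4*pi*R_E^2: r = R_E * sqrt(E_SN / E_absorb)."""
    return R_E * (E_SN / E_absorb) ** 0.5

# ionization/dissociation energies per gas, J, from the lists above
E_ionize = {"N2": 1.686e26, "O2": 2.378e25, "CO2": 6.414e22, "Ar": 3.230e24}
r_ionize_pc = {g: safe_distance(E) / PARSEC for g, E in E_ionize.items()}

# evaporation case for CO2: Delta K_j times N_j from the previous section
E_evap_CO2 = 4.565e-18 * 5.365e40
r_evap_CO2_pc = safe_distance(E_evap_CO2) / PARSEC
```

This recovers the quoted values, e.g. $\sim 0.16$ parsecs for $N_2$ ionization and $\sim 4.2$ parsecs for $CO_2$ evaporation.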
Though an atmospheric chemist/physicist would be better suited to estimate the minimum safe distance for these effects. – honeste_vivere
– TY for this great answer – user46925 Feb 1 '16 at 20:58
– While this is a great answer, is removing the atmosphere the mechanism by which a supernova would kill? E.g. raising the temperature of the atmosphere by 50 C would likely do it. Or raising the temperature of the ground. These would take much less energy than baking away the atmosphere. There may be other mechanisms. – mmesser314 Feb 28 '16 at 13:35
– Just want to point out that the errors you would incur by treating the atmosphere as entirely nitrogen are completely irrelevant when making order-of-magnitude estimates like this. – DanielSank Mar 3 '16 at 23:24
– No mistake. I'm saying that the whole analysis where you include the various different atmospheric gasses is not needed. – DanielSank Mar 4 '16 at 16:08
– It bears noting that Proxima Centauri, the closest star to the solar system, is about 1.3 parsecs away, and that the inert gas argon is not ecologically very important to life on Earth. There are at least 56 star systems, however, within 5 parsecs, so the CO2 number is the one that matters. en.wikipedia.org/wiki/List_of_nearest_stars_and_brown_dwarfs A recent evaluation of the risk of an extinction-causing gamma-ray burst can be found at arxiv.org/abs/1609.09355 – ohwilleke Oct 14 '16 at 5:32
According to Phil Plait and others, anything over 100 light years (and probably a fair bit closer) should be safe. There aren't any known supernova candidates that close. http://earthsky.org/space/supernove-distance https://twitter.com/BadAstronomer/status/201708339904778240 SN 1987A isn't even in our galaxy. It's over 150,000 light years distant. – BowlOfRed
– Twitter content as a reference? Come on.
The other linked article gives no calculations or citations to literature. – Rob Jeffries Jan 28 '16 at 8:16
A cube of side 3 inches has a cube of side 1 inch cut from each corner. A cube of side 2 inches is then inserted in each corner. What is the number of square inches in the surface area of the resulting solid? Our initial cube has 6 faces with 9 square inches of surface area each for a total of 54 square inches. When we cut away the 8 cubes of side length one, we remove 3 square inches of surface area for each one for a total of 24 square inches of surface area lost. We then add a 2 inch cube to each corner for a total of 8 more cubes. A 2 inch cube has a surface area of 24 but each of these cubes is missing 3 $\text{in}^2$ of surface area, so the total surface area is $54-24+8(24-3)=\boxed{198}$ square inches.
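As a quick arithmetic check, the bookkeeping used in the solution above can be reproduced in a few lines (a sketch of the same accounting, not an independent geometric derivation):

```python
original = 6 * 3 ** 2          # surface area of the 3-inch cube: 54 in^2
removed = 8 * 3                # 3 in^2 counted as lost at each of the 8 cut corners
added = 8 * (6 * 2 ** 2 - 3)   # each 2-inch cube adds its 24 in^2 less 3 in^2 hidden
total = original - removed + added
```

This evaluates to 198 square inches, matching the boxed answer.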
\begin{document} \onecolumn \firstpage{1} \title[Determining the ground-state probability $\ldots$]{Determining the ground-state probability of a quantum simulation with product-state measurements} \author[Bryce Yoshimura {et~al.} ]{Bryce Yoshimura\,$^{1,*}$ and J. K. Freericks\,$^{1}$ } \address{} \correspondance{} \extraAuth{} \maketitle \begin{abstract} \section{} One of the goals in quantum simulation is to adiabatically generate the ground state of a complicated Hamiltonian by starting with the ground state of a simple Hamiltonian and slowly evolving the system to the complicated one. If the evolution is adiabatic and the initial and final ground states are connected due to having the same symmetry, then the simulation will be successful. But in most experiments, adiabatic simulation is not possible because it would take too long, and the system has some level of diabatic excitation. In this work, we quantify the extent of the diabatic excitation even if we do not know {\it a priori} what the complicated ground state is. Since many quantum simulator platforms, like trapped ions, can measure the probabilities to be in a product state, we describe techniques that can employ these measurements to estimate the probability of being in the ground state of the system after the diabatic evolution. These techniques do not require one to know any properties about the Hamiltonian itself, nor to calculate its eigenstate properties. All the information is derived by analyzing the product-state measurements as functions of time. 
\tiny \fontsize{8}{11}\helveticabold { \section{Keywords:} Quantum simulation, ion trap, adiabatic state preparation, transverse field Ising model, ground state probability } \end{abstract} \section{Introduction} The Hilbert space that describes a strongly correlated many-body quantum system grows exponentially in the number of particles $N$, so determining the ground state of a complex many-body quantum system becomes numerically intractable when the size of the quantum system becomes too large to be represented on a classical computer (unless there is some other simplification, like weak entanglement, {\it etc.}). The idea to simulate complex many-body quantum systems on a quantum computer was proposed by Feynman in the 1980's~\cite{feynman1981}. Since Feynman's proposal, quantum algorithms have been proposed to calculate eigenvalues and eigenvectors of these intractable systems~\cite{lloyd1999}. One of the challenges with creating the ground state on a quantum computer, say by adiabatic evolution of the ground state from a simple to a complex one, is how to determine the extent of the ground-state preparation. After all, we do not know what the ground state is {\it a priori}, so it is difficult to know what the probability to be in the ground state is. In this work, we propose one method to determine the probability to remain in the ground state. While this analysis is applied to ion-trap emulators (that model interacting spin systems), the general discussion can be applied to any quantum computer that performs ground-state preparation but creates diabatic excitations as a result of a too rapid time evolution. To date, trapped-ion quantum simulators have seen success in two different platforms: the Penning trap has trapped $\approx 300$ ions in a planar geometry and generated Ising spin interactions~\cite{britton2012}, and the linear Paul trap has performed quantum simulations with $18$ ions in a one-dimensional crystal~\cite{senko2013}.
The success of these traps as quantum simulators is attributed to their long coherence times, precise spin control, and high fidelity. Here, we will focus on the linear Paul trap quantum simulator. In an ion-trap quantum simulator, hyperfine states of the trapped ions are used for the different spin states (for simplicity, we can consider only two states, and hence, a spin-one-half system). Optical pumping can be employed to create a product state with the ions all in one of the two hyperfine states with fidelities close to 100\%. A coherent rotation of that state can then be used to create a wide range of different initial product states. By turning on a large magnetic field, this state can be configured to be the ground state of the system. Then the magnetic field is reduced slowly enough for the system to remain in the ground state until the system evolves into the complex spin Hamiltonian in zero magnetic field. The challenge is that the evolution of the system must be completed within the coherence time of the spins, which often is too short to be able to maintain adiabaticity throughout the evolution (and indeed, this becomes increasingly difficult as the system size gets larger). We propose to employ the time evolution of an observable, $\mathcal{O}(t)=\langle\psi(t)|\hat{\mathcal{O}}|\psi(t)\rangle$ (with $|\psi(t)\rangle$ the quantum state of the system at time $t>t_{stop}$), after the evolution to the final Hamiltonian, to measure the absolute probability of the ground state. The time evolution of the observable, $\mathcal{O}(t)$, oscillates at frequencies given by the energy differences between the final eigenstates (where the Hamiltonian becomes time independent).
More concretely, let $\hat{\mathcal{H}}(t_{stop}) |m\rangle = E_m|m\rangle$; then the time-dependent expectation value satisfies \begin{equation} \mathcal{O}(t>t_{stop}) = \sum_{mn} P_m^* P_n \langle m|\hat{\mathcal{O}} |n\rangle \exp[-i(E_n - E_m)t], \label{eq:oscillations} \end{equation} where $P_m = \langle m| \psi(t_{stop}) \rangle $ is the overlap of the state $| \psi(t_{stop}) \rangle$ with the eigenstate $|m\rangle$ (we have set $\hbar=1$). Previously, we showed how Fourier transforming the time series and employing signal processing methods like compressive sensing allows one to extract the energy differences as a type of many-body eigenstate spectroscopy~\cite{shabani2011, spectro_us}. Here, we focus on the amplitude of the oscillations, given by $P^*_1P_n$, which is proportional to the probability amplitude of the ground state ($P_1$), if the ground state still has a high probability in $|\psi(t_{stop})\rangle$. Note that we do not need to know the explicit ground-state wavefunction to extract its probability from these oscillations. This is the main advantage of this technique. We illustrate how the ground-state probability can be extracted by analyzing the amplitude of the oscillations of the simplest time-dependent Hamiltonian: the two-level Landau-Zener problem. The Landau-Zener problem is defined via \begin{equation} \hat{\mathcal{H}}(t) = B^{z}(t) \sigma^{z} + \sigma^{x}. \end{equation} Here $\sigma^{\alpha}$ are the Pauli spin operators in the $\alpha = x$, $y$, or $z$ direction. The Pauli spin operators have the commutation relation \begin{equation} \left[\sigma^{\alpha}, \sigma^{\beta} \right] = 2i \epsilon_{\alpha \beta \gamma} \sigma^{\gamma}, \label{eq:spincommutation} \end{equation} where the Greek letters represent the spatial directions and $\epsilon_{\alpha \beta \gamma}$ is the antisymmetric tensor. The Landau-Zener problem has a minimum energy gap occurring when $B^{z}(t) = 0$, as shown in Fig.~\ref{fig:lzenergy}.
\begin{figure}\label{fig:lzenergy} \end{figure} Since the Landau-Zener problem is a two-state system, the probabilities, $P_1$ and $P_2$, are related by $P_2^2 = 1 - P_1^2$, and the state, $|\psi(t_{stop}) \rangle$, can be represented by $P_1 = \cos(\phi)$ and $P_2 = \sin(\phi)$. Using this state in Eq.~(\ref{eq:oscillations}), we find that the expectation value $\mathcal{O}(t > t_{stop})$ becomes (neglecting terms with no time dependence) \begin{equation} \mathcal{O}(t>t_{stop}) = \cos(\phi)\sin(\phi) \left \{ \langle 1|\hat{\mathcal{O}} |2\rangle \exp[-i(E_2 - E_1)t] + \langle 2|\hat{\mathcal{O}} |1\rangle \exp[-i(E_1 - E_2)t] \right \}. \label{eq:LZoscillations} \end{equation} The ground-state probability [$\cos^2(\phi)$] can then be calculated if $\langle 1|\hat{\mathcal{O}} |2\rangle$ is known. The amplitude of the oscillations is $\sin( 2\phi )\langle 1|\mathcal{O}|2\rangle$. However, even if $\phi$ is extracted from the amplitude, there are always two solutions, except when $\phi = \pi/4$ (see Fig.~\ref{fig:exact}), and hence two possible ground-state probabilities. In Fig.~\ref{fig:exact} we show both the amplitude of the oscillations and the ground-state probability as a function of $\phi$, where the dashed line shows that the amplitude is not unique to a single ground-state probability. However, once the system has more states, one does not have a simple closed set of equations, and the analysis of the amplitude of the oscillations can only deduce the ground-state probability when the ground-state amplitude is dominant in $|\psi(t)\rangle$. We demonstrate this below with the transverse field Ising model. \begin{figure}\label{fig:exact} \end{figure} It is well known that the amount of diabatic excitation in the Landau-Zener problem increases the faster the magnetic field is ramped from $-\infty$ to $+\infty$.
The general protocol that we employ is as follows (and is depicted schematically in Fig.~\ref{fig:lzschematic}): \begin{enumerate} \item Initialize the system in an arbitrary state; in the following examples, we initialize the state in the ground state of the Hamiltonian with a large polarizing magnetic field. \item Decrease the magnetic field as a function of time to evolve the quantum state, as shown in Fig.~\ref{fig:lzschematic}(A), where, for concreteness, we show an example of a magnetic field that changes linearly. \item Hold the magnetic field at its final value, which is first reached at $t = t_{stop}$, until the measurement is performed at the time interval $t_{meas.}$ after the field has been held constant [see Fig.~\ref{fig:lzschematic}(A)]. \item Measure an observable of interest, $\mathcal{O}(t)$, for a number of different $t_{meas.}$ values. \item Determine the amplitude of the oscillations. \end{enumerate} Note that one requirement of this approach is that the observable of interest must oscillate as a function of time as given in Eq.~(\ref{eq:oscillations}). The amplitude is extracted from the first maximum and minimum of the observable as a function of time by \begin{equation} \text{Amplitude} = \frac{ \text{max}[\mathcal{O}(t)] - \text{min}[\mathcal{O}(t)]}{2}. \end{equation} \begin{figure}\label{fig:lzschematic} \end{figure} The time evolution of the wavefunction $|\psi(t)\rangle$ is calculated by solving the time-dependent Schr\"odinger equation, \begin{equation} i\frac{\partial}{\partial t} | \psi(t) \rangle = \hat{\mathcal{H}}(t) | \psi(t) \rangle, \end{equation} using the Crank-Nicolson method to time evolve the state $| \psi (t)\rangle$.
This technique solves the problem with the following approach~\cite{cranknicolson} \begin{equation} \left( \hat{ \mathbb{I}} + i \frac{\delta t}{2} \hat{ \mathcal{ H } }( t + \delta t) \right) | \psi (t + \delta t) \rangle = \left( \hat{\mathbb{I}} - i \frac{\delta t}{2} \hat{ \mathcal{ H } }( t ) \right) | \psi (t ) \rangle. \end{equation} Note that the Hamiltonian is time-dependent until $t_{stop}$ is reached, when it becomes constant in time. We present a numerical example to illustrate this protocol by analyzing the oscillations for the Landau-Zener problem. Due to the fact that the eigenstates for the Landau-Zener problem at $|B^{z}(t_{stop})| \gg 1$ approach the eigenstates of the $\sigma^z$ operator, if one measures an operator that is diagonal in this basis, there will be no oscillations in the expectation value. Hence, we measure the expectation value of the operator $\hat{\mathcal{O}}(\theta)$, the Pauli spin matrix that points in the $\theta$ direction. \begin{equation} \hat{\mathcal{O}}(\theta) = R^{\dagger}(\theta) \sigma^{z} R(\theta), \end{equation} where $R(\theta)$ is the global rotation about the $y$-axis and is given by \begin{equation} R(\theta) = \hat{\mathbb{I}} \cos\left(\frac{\theta}{2} \right) + i \sigma^{y} \sin\left(\frac{\theta}{2} \right), \end{equation} where $\theta = \pi/2$ produces $\hat{\mathcal{O}}(\theta=\pi/2)= \sigma^{x}$. \begin{figure}\label{fig:lzsignal} \end{figure} For our numerical examples with the Landau-Zener problem, we use a linear ramp, $B^{z}(t) = \tau t + B_0$, where $B_0 < 0$. $|B_0|$ is chosen to be large in comparison to 1 to polarize the spin. We evolve the state to $t_{stop}$, such that $B^{z}(t_{stop}) \gg 1$. We present the time evolution for $4$ different $\tau = 1.5$, $3.25$, $5.0$, and $9.0$ for $3$ different $\theta = \pi/9$, $\pi/3$, and $\pi/2$ in Fig.~\ref{fig:lzsignal}. The amplitude of the oscillations becomes $1$ when $\theta = \pi/2$. 
When $\tau=5.0$ the amplitude of the oscillations is maximized in comparison to the $3$ other $\tau$'s. In Fig.~\ref{fig:lzamplitude}, we show the probability of the ground state compared to the amplitude of the oscillations as a function of $\tau$. The amplitude of the oscillations increases as $\theta$ is increased. This is due to the $\sigma^{x}$ term dominating the $\hat{\mathcal{O}}(\theta)$ operator instead of the $\sigma^{z}$ term. When the ground state probability approaches $0.5$ the amplitude of the oscillations is maximal and it decreases either when the ground state probability increases or decreases. As the probability of ground state approaches $1$ the amplitude is expected to become $0$, which can be obscured due to experimental noise. In order to determine which side of the maximum the measurement of the amplitude is on, multiple $\tau$ experiments must be run to track the depletion of the ground state as the amplitude of the oscillations reaches a maximum and then decreases. Note further that in this case, since there is only one excited state, one can, in principle, always determine the ground-state probability by measuring the amplitude of the oscillation and extracting the appropriate probability amplitude. For more complex systems, such a procedure will not be possible, but the monotonic nature of the curve (at least while the probability for the ground state remains above 50\%) will allow us to determine whether a given run of the experiment increases the probability to be in the ground state, which can be employed to optimize the ground-state preparation if it is done with some alternative quantum control method besides adiabatically evolving the system. Indeed, we believe this has the potential to be the most important application of this approach. \begin{figure}\label{fig:lzamplitude} \end{figure} This simple example shows us a number of important points. 
First, one may need to rotate the measurement basis if the final product-basis states are eigenvectors of the Hamiltonian. Second, as the ground state is depleted, the amplitude of the oscillations grows until it reaches a maximum, when the system equally populates both eigenstates. If the ground state is further depleted, the amplitude of the oscillations will decrease. One can make a mistake in estimating the probability to be in the ground state if one does not know which side of the curve one is on (probability of the ground state below or above 50\%). On the other hand, if one knows which side of the curve one is on, from measurements made at earlier times to track the ground-state depletion, then one might be able to further use the amplitude to determine the ground-state probability in the Landau-Zener problem. When we change to the ion-trap system and examine the transverse-field Ising model, the procedure becomes more complicated because there are more states into which the ground state can be depleted. \section{Transverse field Ising model} Now we describe a more realistic case of the transverse field Ising model.
The transverse field Ising model for $N$ particles is given by \begin{equation} \hat{\mathcal{H}}(t) = -J_{\pm} \sum^N_{i < j} J_{ij}\sigma^{z}_i \sigma^{z}_j - B^{x}(t)\sum^N_{i=1} \sigma^{x}_i, \label{eq:ComHam} \end{equation} where the $J_{ij}$ are the spin-spin interactions produced by a spin-dependent force and given by~\cite{monroe_duan} \begin{equation} J_{ij} = \Omega^2\nu_R \sum_{\nu =1}^N \frac{b^*_{i\nu} b_{j\nu}}{\mu^2 - \omega^2_{\nu}}, \label{eq:interaction} \end{equation} where $b_{i\nu}$ is the normalized eigenvector of the $\nu^{\rm th}$ phonon mode, $\omega_{\nu}$ is the corresponding frequency of the phonon mode, $\Omega$ is the single-spin-flip Rabi frequency, and $\nu_R$ is the recoil frequency associated with the dipole force on an ion, from which we define our energy units with $J_0 = \Omega^2\nu_R$. The Raman beatnote frequency $\mu$ is tuned to the blue of the largest $\omega_{\nu}$ (which here is the center-of-mass phonon, $\omega_{COM}$). The details of calculating the $J_{ij}$ can be found elsewhere~\cite{three_ion}. The $J_{ij} \propto | r_{ij}|^{-\alpha}$, with $r_{ij}$ the interparticle distance and the exponent $\alpha$ being tunable between 0 and 3. The exponent $\alpha$ is tuned by changing $\mu$ or by changing the ratio of the longitudinal to the transverse trap frequencies. Here, we study the ferromagnetic interaction of the Ising model with $J_{\pm} > 0$. The Pauli spin matrices are now associated with each lattice site. The $J_{ij}$ of the transverse Ising model have spatial-reflection symmetry, such that $J_{ij} = J_{ji}$, and the eigenstates have the same symmetry. The eigenstates of the transverse field Ising model also have definite spin-reflection parity, that is, definite parity under the partial inversion transformation $\sigma^{x} \rightarrow \sigma^{x}$, $\sigma^{y} \rightarrow -\sigma^{y}$, and $\sigma^{z} \rightarrow -\sigma^{z}$.
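As a concrete illustration of Eq.~(\ref{eq:ComHam}), the following Python sketch builds the Hamiltonian matrix for a small chain. Note that it replaces the exact phonon-mode expression for $J_{ij}$ in Eq.~(\ref{eq:interaction}) by its approximate power-law form $J_{ij} \approx J_0/|i-j|^{\alpha}$; that substitution, and the small system size, are simplifying assumptions:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on_site(op, i, N):
    # embed a single-site operator at site i of an N-site chain
    ops = [I2] * N
    ops[i] = op
    return reduce(np.kron, ops)

def ising_hamiltonian(N, Bx, J0=1.0, alpha=1.0, J_pm=1.0):
    # H = -J_pm sum_{i<j} J_ij sz_i sz_j - Bx sum_i sx_i, using the
    # power-law approximation J_ij ~ J0 / |i-j|^alpha in place of the
    # exact phonon-mode sum (an assumption for this sketch).
    dim = 2 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(N):
        for j in range(i + 1, N):
            Jij = J0 / abs(i - j) ** alpha
            H -= J_pm * Jij * op_on_site(sz, i, N) @ op_on_site(sz, j, N)
        H -= Bx * op_on_site(sx, i, N)
    return H

# At Bx = 0 the ground state is ferromagnetic (all-up / all-down, degenerate)
N = 4
H0 = ising_hamiltonian(N, Bx=0.0)
evals = np.linalg.eigvalsh(H0)
print("ground-state energy at Bx=0:", evals[0])
```

At $B^x=0$ the two fully polarized product states give the exact ground-state energy $-\sum_{i<j} J_{ij}$, which the diagonalization reproduces, including the two-fold degeneracy.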
The spin-reflection parity and spatial-reflection symmetry produce avoided crossings between eigenstates with the same parity and symmetry, such that a minimum energy gap to the lowest coupled state occurs, as shown in Fig.~\ref{fig:isingenergy}. \begin{figure}\label{fig:Isingcartoon} \end{figure} The experimental protocol is essentially the same as before, except for a few differences. The first difference is that the transverse magnetic field now decays exponentially as a function of time and is given by \begin{equation} B^{x}(t) = B_0 \, e^{-t/\tau}, \end{equation} as shown in Fig.~\ref{fig:Isingcartoon}(A). The second difference is that we will perform two different experimental protocols: the initial state is evolved to $t_{stop} = 6\tau$ and, before the time interval $t_{meas.}$ starts, either the magnetic field is quenched to zero, $B^{x}(t_{meas.})=0$, as shown in Fig.~\ref{fig:Isingcartoon}(A), or the transverse magnetic field is held at its final value, which is first reached at $t = t_{stop}$, as depicted in Fig.~\ref{fig:Isingcartoon}(B). We work with parameters for the ion chain where the exponent $\alpha\approx 1$. The energy spectra are plotted in Fig.~\ref{fig:isingenergy}. \begin{figure}\label{fig:isingenergy} \end{figure} There are a number of additional complications. First off, the eigenstates at $B^x=0$ are product states along the $z$ direction; hence, we need to rotate again to see the oscillations. We choose $\hat{\mathcal{O}}(\theta)$ to be the average magnetization in the $\theta$-direction \begin{equation} \hat{\mathcal{O}}(\theta) = \frac{1}{N} R^{\dagger}(\theta) \sum_{i=1}^N \sigma_i^{z} R(\theta), \end{equation} where $R(\theta)$ is now the global rotation given by \begin{equation} R(\theta) = \prod_{i=1}^N \left [ \hat{\mathbb{I}} \cos\left(\frac{\theta}{2} \right) + i \sigma^{y}_i \sin\left(\frac{\theta}{2} \right) \right ] , \end{equation} where $\theta = \pi/2$ yields $\sigma^{x}_{\rm tot}/N$.
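As a quick consistency check of the two equations above, one can verify numerically that the globally rotated observable at $\theta = \pi/2$ reduces to the average $x$-magnetization. This Python sketch (the three-site chain is an arbitrary illustrative choice) builds $R(\theta)$ as a tensor product of single-site rotations:

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site(op, i, N):
    # embed a single-site operator at site i of an N-site chain
    ops = [I2] * N
    ops[i] = op
    return reduce(np.kron, ops)

def global_rotation(theta, N):
    # R(theta) = prod_i [I cos(theta/2) + i sy_i sin(theta/2)]
    r1 = I2 * np.cos(theta / 2) + 1j * sy * np.sin(theta / 2)
    return reduce(np.kron, [r1] * N)

def avg_magnetization_op(theta, N):
    # O(theta) = (1/N) R^dagger (sum_i sz_i) R
    R = global_rotation(theta, N)
    S = sum(site(sz, i, N) for i in range(N))
    return R.conj().T @ S @ R / N

N = 3
O = avg_magnetization_op(np.pi / 2, N)
Sx = sum(site(sx, i, N) for i in range(N)) / N
print("max deviation from average x-magnetization:", np.abs(O - Sx).max())
```

The deviation is zero to machine precision, and $R(\theta)$ is unitary for any $\theta$, so the rotation can be absorbed into the measurement basis as described in the text.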
Measuring the average magnetization in the $\theta$-direction produces the needed oscillations. \begin{figure}\label{fig:isingsignal} \end{figure} We next show simulated data for the transverse field Ising model with $J_{\pm} = 1$ and $J_0 = 1$~kHz. The parameters for the $J_{ij}$ are $\mu = 1.0219 \omega_{COM}$, and the ratio of the longitudinal to the transverse trap frequencies is $0.691/4.8$, which results in $\alpha \approx 1.0$. The initial state is evolved to $t_{stop} = 6\tau$ and, before the time interval $t_{meas.}$, the field is quenched to zero, as shown in Fig.~\ref{fig:Isingcartoon}(A). In Fig.~\ref{fig:isingsignal}, we show the time evolution of $\mathcal{O}(\theta)$ with $\theta = \pi/9$, $\pi/3$, and $\pi/2$ for $3$ different $\tau J_0 =0.2$, $0.4$, and $0.6$. The amplitude of the oscillations follows a similar trend to Fig.~\ref{fig:lzsignal}, such that at $\tau =0.4$ the amplitude of the oscillations is at a maximum in comparison to the other $\tau$'s. Additionally, as $\theta$ is increased to $\pi/2$, the amplitude of the oscillations increases, as previously seen in the Landau-Zener example. In Fig.~\ref{fig:isingamplitude}, we compare the probability of the ground state to the amplitude as a function of the ramping $\tau$. In general, the amplitude of the oscillations is maximized near $\tau = 0.4$, when the ground state probability is $\approx 0.61$, and the amplitude decreases as the probability to be in the ground state either increases or decreases. Similar to the Landau-Zener problem, as the probability to be in the ground state increases to $1$, the amplitude will decrease to $0$. However, when the ground state probability approaches $0.5$, the amplitude of the oscillations is at a local minimum. As previously seen in the Landau-Zener example, a single measurement of the amplitude ramped at $\tau$ cannot determine whether the probability of the ground state is high or low. Hence, the probability needs to be tracked by using a series of measurements.
\begin{figure}\label{fig:isingamplitude} \end{figure} In general, the analysis of the amplitude can be done for different $t_{stop}$'s in which the transverse magnetic field is held constant at the strength $B^x(t_{stop})$, as shown in Fig.~\ref{fig:Isingcartoon}(B). Fig.~\ref{fig:isingrunning} shows the amplitude of the oscillations as a function of $B^x(t_{stop})$ for $3$ different $\tau = 0.2$, $0.4$, and $0.6$. The amplitude of the oscillations increases as the transverse magnetic field approaches the minimum energy gap and excitations are created from the ground state, depending on the $\tau$. Once past the minimum energy gap, the ground-state probability increases as de-excitations occur, and conversely the amplitude of the oscillations decreases as well. However, a similar response will occur if excitations are being created after the minimum energy gap. Unfortunately, the analysis of the amplitude will not distinguish between these two possibilities. \begin{figure}\label{fig:isingrunning} \end{figure} \section{Conclusion} In this work, we have proposed to analyze the amplitude of the oscillations for a given time-dependent Hamiltonian that is held constant for a time interval $t_{meas.}$ to extract information about the ground-state probability. We demonstrated this analysis for the Landau-Zener problem and for the transverse field Ising model (as would be simulated in the linear Paul trap). In both Hamiltonians, the amplitude of the oscillations reaches a maximum at a particular probability of the ground state and decreases as the ground-state probability either increases or decreases. Hence, a single measurement of the amplitude cannot determine which side of the maximum one is on. Therefore, multiple measurements must be made in which the number of excitations is varied. Additionally, as the probability of the ground state approaches $1$, the amplitude decreases to $0$, which can be difficult to measure given experimental noise.
In this work, we have described the simplest analysis one can do to extract information about the probability of the ground state. This approach can be refined by using signal processing techniques like compressive sensing~\cite{donoh2006} to determine the Fourier spectra of the excitations. By monitoring the change of the weights of the delta functions, one can produce more accurate quantitative predictions for the probability of the ground state, because we can directly measure $P_1^*P_m$ for a few different $m$ values. But this goes beyond the analysis we have done here. For the transverse field Ising model, de-excitations are observed and are reflected in the amplitude of the oscillations. However, after the minimum energy gap, more diabatic excitations can be created, and it is difficult to distinguish between the de-excitations and excitations. One interesting aspect is that, as long as the ground-state probability remains high enough, measuring the height of the oscillation amplitude can be used to optimize the ground-state probability as a function of the parameters used to determine the time evolution of the system. This can be a valuable tool for optimizing the adiabatic state preparation protocol over some set of optimization parameters. \section*{Author Contributions} JF and BY contributed equally to this manuscript. \section*{Acknowledgments} J. K. F. and B. T. Y. acknowledge support from the National Science Foundation under grant number PHY-1314295. J. K. F. also acknowledges support from the McDevitt bequest at Georgetown University. B.T. Y. acknowledges support from the Achievement Rewards for College Students Foundation. \end{document}
\begin{document} \title{COMPLEX LAGRANGIAN EMBEDDINGS OF\\[10pt] MODULI SPACES OF VECTOR BUNDLES} \author{U. Bruzzo \& \ F. Pioli} \thanks{E-Mail addresses: {\tt [email protected]}, {\tt [email protected]}.} \keywords{Moduli spaces of stable bundles, Fourier-Mukai transform, Complex Lagrangian submanifolds.} \subjclass{14D20, 14J60, 53C42} \maketitle \begin{center}\baselineskip=12pt \par\vskip-4mm\par {Scuola Internazionale Superiore di Studi Avanzati } \par {(SISSA), Via Beirut 2-4, 34014 Trieste, Italy} \end{center} \par \par \begin{quote}\footnotesize\baselineskip=14pt {\sc Abstract.} By means of a Fourier-Mukai transform we embed moduli spaces ${\mathcal M}_C(r,d)$ of stable bundles on an algebraic curve $C$ of genus $g(C)\ge 2$ as isotropic subvarieties of moduli spaces of $\mu$-stable bundles on the Jacobian variety $J(C)$. When $g(C)=2$ this provides new examples of special Lagrangian submanifolds. \end{quote} \par\addvspace{8mm}\par \section{Introduction} Throughout this paper we shall fix ${\mathbb C}$ as the ground field. Let $C$ be a smooth algebraic curve of genus $g>1$, and denote by $J(C)$ its Jacobian variety and by $\Theta\in H^2(J(C),\Z)$ the cohomology class corresponding to the theta divisor. Fix coprime positive integers $r$, $d$ such that $d>2rg$, and let ${\mathcal M}_C(r,d)$ be the moduli space of stable vector bundles on $C$ of Chern character $(r,d)$. We show that ${\mathcal M}_C(r,d)$ can be embedded as an isotropic holomorphic submanifold of the complex symplectic variety ${\mathcal M}^\mu_{J(C)}(r,d)={\mathcal M}_{J(C)}^\mu(d+r(1-g),-r\Theta,0,\dots,0)$ --- the moduli space of $\mu$-stable vector bundles on $J(C)$ with Chern character $(d+r(1-g),-r\Theta,0,\dots,0)$ (cf. Theorem \ref{tiram} for a precise statement). 
When $g(C)=2$ one has $\dim{\mathcal M}^\mu_{J(C)}(r,d)=2\dim{\mathcal M}_C(r,d)$, and by using the hyper-K\"ahler structure of ${\mathcal M}^\mu_{J(C)}(r,d)$, one can choose on this space a complex structure such that ${\mathcal M}_C(r,d)$ embeds as a special Lagrangian submanifold, thus providing new examples of such objects. We recall a few facts about the Fourier-Mukai transform in the context of Abelian varieties \cite{Mu1}. Let $X$ be an Abelian variety and $\widehat{X}=\operatorname{Pic}^0 (X)$ its dual variety. Let $\mathcal {P}$ be the normalized Poincar\'e bundle on $X\times \widehat{X}$. The Mukai functor is defined as \begin{gather*} \mathbf {R} \mathcal {S} \colon D(X)\to D(\widehat{X})\\ \mathbf {R} \mathcal {S} (-) =\mathbf {R}\pi_{\widehat{X},\ast } (\pi_X^\ast (-)\otimes \mathcal {P}) \end{gather*} where $D(X)$ and $D(\widehat{X})$ are the bounded derived categories of coherent sheaves on $X$ and $\widehat{X}$, respectively. Mukai has shown that the functor $\mathbf {R} \mathcal {S}$ is invertible and preserves families of sheaves (cf. \cite{Mu1,Mu3}). If $E$ is a $\text{WIT}_i$ sheaf on $X$, that is, a sheaf whose transform is concentrated in degree $i$, then the functor $\mathbf {R} \mathcal {S}$ preserves the Ext groups: $$\operatorname{Ext}^j_X (E,E) \cong \operatorname{Ext}^j_{\widehat {X}} (\hat E,\hat E) \quad \text{for every } j, $$ where $\hat E$ indicates the transform of $E$. Let $C$ be a smooth projective curve of genus $g> 1$ and $J(C)$ the Jacobian of $C$. If we fix a base point $x_0$ on $C$, and let $\alpha_{x_0} \colon C\to J(C)$ be the Abel-Jacobi embedding given by $\alpha_{x_0}(x)={\mathcal{O}}_C (x-x_0)$, the normalized Poincar\'e bundle $\mathcal {P}_C$ on $C\times J(C)$ is the pullback of the Poincar\'e bundle on $J(C)\times J(C)$, where we identify $J(C)$ with $\widehat{J(C)}$ via the isomorphism $-\phi_\Theta \colon J(C) \to \widehat{J(C)}$.
The Poincar\'e bundle on $C\times J(C)$ gives rise to a derived functor (which is not invertible): \begin{gather*} \mathbf {R} \Phi_C\colon D(C)\to D(J(C))\\ \mathbf {R} \Phi_C (-) =\mathbf {R}\pi_{J(C),\ast } (\pi_C^\ast (-)\otimes \mathcal {P}_C)\,. \end{gather*} Since $\alpha_{x_0}$ is a closed immersion we have a natural isomorphism of functors \begin{equation}\label{relafond} \mathbf {R} \Phi_C \cong \mathbf {R} \mathcal {S} \circ \alpha_{x_0,\ast }. \end{equation} Thus the study of the transforms of bundles $F$ on $C$ with respect to $\mathbf {R} \Phi_C$ is equivalent to studying the transforms of sheaves of pure dimension $1$ of the form $\alpha_{x_0,\ast } (F)$ with respect to $\mathbf {R} \mathcal {S}$. We recall the following fact which is proven in \cite{Li}. \begin{prop} If $E$ is a stable bundle on $C$ of rank $r$ and degree $d$ such that $d>2rg$, then $E$ is {\rm WIT}$_0$, and the transformed sheaf $\hat E = \mathbf {R}^0 \Phi_C (E)$ is locally free and $\mu$-stable with respect to the theta divisor on $J(C)$. \end{prop} \par\addvspace{8mm}\par \section{Complex Lagrangian embeddings} If we consider the moduli space ${\mathcal M}_C(r,d)$ of stable bundles of rank $r$ and degree $d$ on a projective smooth curve of genus $g>1$ such that $d>2rg$ and $r,d$ are coprime, the functor $\mathbf {R} \Phi_C$ gives rise to an injective morphism $$\tilde \jmath \colon {\mathcal M}_C(r,d) \to {\mathcal M}^\mu_{J(C)}(r,d)= {\mathcal M}^{\mu}_{J(C)}(d+r(1-g),-r\Theta,0,\dots,0)$$ where the sheaves in ${\mathcal M}_{J(C)}^\mu(r,d)$ are stable with respect to the polarization $\Theta$. Before studying the morphism $\tilde \jmath$ we need to recall some elementary facts about the Yoneda product of Ext groups. Let $\mathcal A$ be an abelian category with enough injectives. The elements of $\operatorname{Ext}^1_\mathcal A (E,E)$ are identified with equivalence classes of exact sequences $0\to E \to F\to E \to 0$ with respect to the usual relation. 
This can be generalized to the groups $\operatorname{Ext}^2_\mathcal A (E,E)$ as follows. We refer to \cite{HiSt} for proofs and details. Consider the following commutative diagram with exact rows: \begin{equation} \xy \xymatrix { E:\quad 0 \ar[r] &B \ar[d]^{\operatorname{Id}_B} \ar[r]& G_1 \ar[d] \ar[r] &G_2 \ar[d]\ar[r] & A \ar[d]^{\operatorname{Id}_A}\ar[r]& 0 \\ E^\prime :\quad 0 \ar[r]& B \ar[r]& G^\prime_1 \ar[r]& G^\prime_2 \ar[r] & A\ar[r] &0. } \endxy \label{triext} \end{equation} We write $E \twoheadrightarrow E^\prime$ when such a diagram holds. The relation $\twoheadrightarrow$ is not symmetric, but it generates the following equivalence relation: $E\sim E^\prime$ if and only if there exists a chain of sequences $E_0, E_1, \dots, E_k$ such that $$E=E_0 \twoheadrightarrow E_1 \twoheadleftarrow E_2 \twoheadrightarrow \dots \twoheadleftarrow E_k = E^\prime.$$ Let $\operatorname{Yext}_\mathcal A^2 (-,-)$ be the set of such equivalence classes. There is an isomorphism $$ \operatorname{Yext}_\mathcal A^2 (-,-)\cong \operatorname{Ext}^2_\mathcal A (-,-).$$ {}From now on we shall identify the above groups. Observe that the identity of $\operatorname{Ext}_\mathcal A^2 (A,B)$ is given by the class of the sequence $$0\lfd B\stackrel{\operatorname{Id}_B}\lfd B \stackrel{0}\lfd A \stackrel{\operatorname{Id}_A}\lfd A \lfd 0 .$$ Moreover, the Yoneda product $$\operatorname{Ext}^1_\mathcal A (B,A) \times \operatorname{Ext}^1_\mathcal A (A,C) \to \operatorname{Ext}^2_\mathcal A(B,C) $$ is obtained in the following way: let $E$ and $E^\prime $ be two elements of $\operatorname{Ext}^1_\mathcal A (B,A)$ and $\operatorname{Ext}^1_\mathcal A (A,C)$ represented respectively by the sequences \begin{equation*} E: \quad 0\lfd A\stackrel{\nu}\lfd F \stackrel{p}\lfd B \lfd 0 \end{equation*} \begin{equation*} E^\prime:\quad 0\lfd C\stackrel{i}\lfd G \stackrel{\lambda}\lfd A \lfd 0.
\end{equation*} Then the class of the exact sequence \begin{equation*} 0\lfd C\stackrel{i}\lfd G \stackrel{\nu\circ \lambda}\lfd F \stackrel{p}\lfd B \lfd 0 \end{equation*} in $\operatorname{Ext}_\mathcal A^2 (B,C)$ is the image of $E$, $E^\prime$ with respect to the Yoneda product. We shall also need to introduce a moduli space of stable sheaves in Simpson's sense \cite{Simp}. For simplicity we denote the Abel-Jacobi map as $j\colon C\to J(C)$. Observe that if $E$ is a stable bundle on $C$ then $j_\ast(E)$ is a stable sheaf of pure dimension 1 on $J(C)$ with respect to the polarization $\Theta$. Let ${\mathcal M}_{J(C)}^{\hbox{\tiny pure}}(r,d)$ be the moduli space of all stable pure sheaves on $J(C)$ with Chern character $(0,\dots,0,r\Theta,d+r(1-g))$. If $\mathcal {E}$ is a flat family of vector bundles on $C$ parametrized by a Noetherian scheme $S$, then $j_{S,\ast}(\mathcal {E})$ is a flat family of sheaves on $J(C)\times S$ over $S$, where $j_S\colon C\times S\to J(C)\times S $ is the embedding $j\times \operatorname{Id}_S$. Therefore one has a morphism of moduli spaces \begin{equation}j_\ast : {\mathcal M} (r,d) \to {\mathcal M}_{J(C)}^{\hbox{\tiny pure}}(r,d)\,. \label{am}\end{equation} \begin{lemma} The morphism $\tilde \jmath \colon {\mathcal M}_C(r,d) \to M^{\mu}_{J(C)}(r,d)$ is an immersion (i.e., its tangent map is injective). \end{lemma} \begin{proof} From the isomorphism given by Eq.~(\ref{relafond}) and recalling that the transform $\mathbf {R}\mathcal {S}$ preserves the $\operatorname{Ext}$ groups of WIT sheaves, it is enough to show that the same claim holds for the morphism (\ref{am}). By the very definition of the Kodaira-Spencer map, the tangent map to $j_\ast$ may be identified with the map $$\operatorname{Ext}^1_{C} (E,E)\stackrel{\phi}\hookrightarrow \operatorname{Ext}^1_{J(C)} (j_\ast(E),j_\ast (E))$$ obtained in the following way. 
Let \begin{equation}\label{seqa} A: \quad 0\lfd E \lfd F \lfd E \lfd 0 \end{equation} be a sequence representing an element of $\operatorname{Ext}^1_{C} (E,E)$. If we apply the functor $j_\ast $ to the above sequence we obtain the exact sequence \begin{equation}\label{seqb} B: \quad 0\lfd j_\ast (E) \lfd j_\ast (F) \lfd j_\ast (E) \lfd 0. \end{equation} One checks immediately that the map $\phi ([A]) = [B]$ is well defined. If $\phi ([A]) = 0$ then $\phi ([A])$ is represented by the extension \begin{equation}\label{spez1} \quad 0\lfd j_\ast (E) \lfd j_\ast (E)\oplus j_\ast (E) \lfd j_\ast (E) \lfd 0. \end{equation} Now applying the functor $j^\ast $ to the above sequence and noting that $j^\ast (j_\ast (H))$ $\cong H$ for every vector bundle $H$ on $C$ we obtain the split exact sequence \begin{equation}\label{spez2} \quad 0\lfd E \lfd E\oplus E\lfd E \lfd 0. \end{equation} Therefore $\phi ([A]) = 0$ implies $[A]=0$ and $\phi$ is injective. \end{proof} Mukai proved that the moduli space of simple sheaves on an abelian surface $X$ is symplectic; more precisely, the Yoneda pairing $$\upsilon\colon\operatorname{Ext}_X^1(E,E)\times \operatorname{Ext}_X^1(E,E) \to \operatorname{Ext}_X^2(E,E)\cong \mathbb C$$ defines a holomorphic symplectic form on the moduli of simple sheaves on $X$ (cf. \cite{Mu2,Mu4}). When $\dim X=2n>2$ to define a symplectic form on the smooth locus of the moduli space one needs to choose a symplectic form $\omega$ on $X$. 
The symplectic form is then defined by the compositions (cf.~\cite{K}) \begin{eqnarray} \operatorname{Ext}_X^1(E,E)\times\operatorname{Ext}_X^1(E,E) &\to& \operatorname{Ext}_X^2(E,E) \stackrel{\o{tr}}{\relbar\joinrel\relbar\joinrel\to} H^2(X,\cO_X) \nonumber \\ &\stackrel{\sim}{\to}& H^{0,2}(X,{\mathbb C}) \stackrel{\lambda}{\relbar\joinrel\relbar\joinrel\to} H^{n,n}(X,{\mathbb C}) \cong{\mathbb C} \label{sf}\end{eqnarray} where $\o{tr}$ is the trace morphism and the map $\lambda$ is obtained by wedging by $\omega^{n}\wedge\bar\omega^{n-1}$. \begin{thm}\label{tiram} If $g(C)$ is even, and the map $\tilde{\jmath}$ embeds ${\mathcal M}(r,d)$ into the smooth locus ${\mathcal M}_{J(C)}^0(r,d)$ of ${\mathcal M}_{J(C)}^\mu(r,d)$, then the subvarieties ${\mathcal M}_C(r,d)$ are isotropic with respect to any of the symplectic forms defined by equation (\ref{sf}). In particular, when $g(C)=2$ the subvarieties ${\mathcal M}_C(r,d)$ are Lagrangian with respect to the Mukai form of ${\mathcal M}^{\mu}_{J(C)}(r,d)$. \end{thm} \begin{proof} Since ${\mathcal M}_{J(C)}^0(r,d)$ is smooth, and $\tilde\jmath\colon {\mathcal M}(r,d)\to {\mathcal M}_{J(C)}^0(r,d)$ is injective and is an immersion, it is also an embedding. Now, let $E\in {\mathcal M}_C(r,d)$. It is enough to show that the Yoneda product \begin{eqnarray*} \operatorname{Ext}^1_{J(C)} (j_\ast (E), j_\ast (E)) &\times& \operatorname{Ext}^1_{J(C)} (j_\ast (E), j_\ast (E)) \\ && \lfd \operatorname{Ext}^2_{J(C)} (j_\ast (E), j_\ast (E)) \end{eqnarray*} vanishes when applied to pairs $([A],[B])$ of elements in $\operatorname{Ext}^1_{J(C)} (j_\ast (E),$ $j_\ast (E))$ where $[A]$ and $[B]$ are represented, respectively, by the sequences \begin{equation*} A:\qquad 0\lfd j_\ast (E) \stackrel{\nu}\lfd j_\ast (F) \stackrel{ p}\lfd j_\ast (E) \lfd 0 \end{equation*} \begin{equation*} B: \qquad 0\lfd j_\ast (E) \stackrel{i}\lfd j_\ast (G) \stackrel{ \lambda}\lfd j_\ast (E) \lfd 0 \end{equation*} with $F,G\in {\mathcal M}_C(r,d)$.
It is enough to remark that the product of the classes of the sequences of sheaves on $C$ $$ 0 \to E \to F\to E \to 0\,,\qquad 0 \to E \to G \to E \to 0$$ is zero for dimensional reasons, and apply the functor $j_\ast$. In the case $g(C)=2$ the moduli space is smooth by the results in \cite{Mu2}; moreover, $$\dim{\mathcal M}_{J(C)}^\mu(r,d)=2(r^2+1)=2\dim{\mathcal M}_C(r,d)\,.$$ \end{proof} \begin{remark} If we consider the moduli space ${\mathcal M}_C(r,\xi)$ of stable bundles on $C$ of rank $r$ and fixed determinant isomorphic to $\xi$, then the result is trivial: the variety ${\mathcal M}_C(r,\xi)$ is Fano, so that it carries no holomorphic forms. $\blacktriangle$\end{remark} \par\addvspace{8mm}\par \section{The case $g(C)=2$\label{exb}} In this section we elaborate on the case $g(C)=2$. One can characterize situations where the moduli space ${\mathcal M}_{J(C)}^\mu(r,d)$ is compact. This happens for instance in the following case. \begin{prop} Assume $g(C)=2$, $d>4r$ and that $\rho=d-r$ is a prime number. Then every Gieseker-semistable sheaf on $J(C)$ with Chern character $(d-r,-r\Theta,0)$ is $\mu$-stable. Moreover, if $d>r^2+r$, every such sheaf is locally free (this always happens when $r=1,2,3$). \end{prop} \begin{proof} Since $d-r$ is prime, every sheaf in ${\mathcal M}_{J(C)}(r,d)$ is properly stable. Let $[F]\in{\mathcal M}_{J(C)}(r,d)$ and assume that the subsheaf $G$ destabilizes $F$. Let $\o{ch}(G)=(\sigma,\xi,s)$. Standard computations show that if $F$ is not $\mu$-stable then $$\frac{\xi\cdot\Theta}{\sigma}=-\frac{2r}{\rho}\qquad\text{and}\qquad s<0\,.$$ Setting $n=\xi\cdot\Theta$ we have $\vert n\vert =2r\sigma/\rho$, with $\sigma<\rho$ and $\rho>3r$. This is impossible whenever $\rho$ is prime. The statement about local freeness follows from the Bogomolov inequality. 
\end{proof} In the case $g(C)=2$ the complex Lagrangian embedding $\tilde \jmath\colon {\mathcal M}_C(r,d)$ $\to {\mathcal M}^{\mu}_{J(C)}(r,d)$ provides new examples of \emph{special Lagrangian submanifolds.} We refer to \cite{HL,ML} for the definition and the main properties of these objects. Now, if $X$ is a hyper-K\"ahler manifold of complex dimension $2n$, let $I$, $J$, $K$ be three complex structures compatible with the hyper-K\"ahler metric, and such that $IJ=K$. Let $\omega_I$, $\omega_J$, $\omega_K$ be the corresponding K\"ahler forms. Then the 2-form $\Omega=\omega_I+i\omega_J$ is a holomorphic symplectic form in the complex structure $K$. It is easy to check that a $K$-complex $n$-dimensional submanifold which is Lagrangian with respect to $\Omega$ is special Lagrangian in the structure $J$ \cite{H2}. One should notice that via the Hitchin-Kobayashi correspondence (which identifies $\mu$-stable bundles on a K\"ahler manifold with irreducible Einstein-Hermite bundles, cf.~\cite{K}), the space ${\mathcal M}^{\mu}_{J(C)}(r,d)$ acquires a hyper-K\"ahler structure, compatible with a K\"ahler form provided by the Weil-Petersson metric, and with a holomorphic symplectic form which may be identified with the Mukai form \cite{I}. Therefore we obtain the following result. \begin{prop} The space ${\mathcal M}^{\mu}_{J(C)}(r,d)$ has a complex structure such that $\tilde \jmath\colon {\mathcal M}_C(r,d)$ $\to {\mathcal M}^{\mu}_{J(C)}(r,d)$ is a special Lagrangian submanifold. \end{prop} The elements of the Jacobian variety $J(C)$ act on the embedding $j\colon C\to J(C)$ by translation, so that for every $x\in J(C)$ we have a special Lagrangian submanifold $\tilde \jmath_x\colon {\mathcal M}_C(r,d)\to {\mathcal M}^{\mu}_{J(C)}(r,d)$. This provides a family of deformations of $\tilde \jmath({\mathcal M}_C(r,d))$ through special Lagrangian submanifolds. 
As one easily shows, this embeds $J(C)$ into the moduli space ${\mathcal M}_{SL}$ of special Lagrangian deformations of $\tilde \jmath({\mathcal M}_C(r,d))$ (notice that $\dim_{{\mathbb R}}{\mathcal M}_{SL}=b_1({\mathcal M}_C(r,d))=4=\dim_{{\mathbb R}}J(C)$) \cite{N}. The case $r=1$ is somehow trivial because ${\mathcal M}^{\mu}_{J(C)}(1,d)\simeq J(C)\times J(C)$ by a result of Mukai \cite{Mu1}. {\bf Acknowledgements.} We thank A.~Maciocia and M.S.~Narasimhan for useful suggestions or remarks. \end{document}
\begin{document} \title{Fast convergence of dynamical ADMM via time scaling of damped inertial dynamics} \author{Hedy Attouch\thanks{IMAG, Univ. Montpellier, CNRS, Montpellier, France. E-mail: \url{[email protected]}.} \and Zaki Chbani\thanks{Cadi Ayyad Univ., Faculty of Sciences Semlalia, Mathematics, 40000 Marrakech, Morroco. E-mail: \url{[email protected]}.} \and Jalal Fadili\thanks{Normandie Universit\'e, ENSICAEN, UNICAEN, CNRS, GREYC, France. E-mail: \url{[email protected]}.} \and Hassan Riahi\thanks{Cadi Ayyad Univ., Faculty of Sciences Semlalia, Mathematics, 40000 Marrakech, Morroco. E-mail: \url{[email protected]}.} } \date{} \maketitle \begin{abstract} In this paper, we propose in a Hilbertian setting a second-order time-continuous dynamic system with fast convergence guarantees to solve structured convex minimization problems with an affine constraint. The system is associated with the augmented Lagrangian formulation of the minimization problem. The corresponding dynamics brings into play three general time-varying parameters, each with specific properties, and which are respectively associated with viscous damping, extrapolation and temporal scaling. By appropriately adjusting these parameters, we develop a Lyapunov analysis which provides fast convergence properties of the values and of the feasibility gap. These results will naturally pave the way for developing corresponding accelerated ADMM algorithms, obtained by temporal discretization. \end{abstract} \noindent \textbf{Keywords:~} Augmented Lagrangian; ADMM; damped inertial dynamics; convex constrained minimization; convergence rates; Lyapunov analysis; Nesterov accelerated gradient method; temporal scaling. \noindent \textbf{AMS subject classification} 37N40, 46N10, 49M30, 65B99, 65K05, 65K10, 90B50, 90C25. 
\section{Introduction}\label{sec:prel} Our paper is part of the active research stream that studies the relationship between continuous-time dissipative dynamical systems and optimization algorithms. From this perspective, damped inertial dynamics offer a natural way to accelerate these systems. An abundant literature has been devoted to the design of the damping terms, which is the basic ingredient of the optimization properties of these dynamics. In line with the seminal work of Polyak on the heavy ball method with friction \cite{Polyak1,Polyak2}, the first studies have focused on the case of a fixed viscous damping coefficient \cite{Alv0,AGR,AABR}. A decisive step was taken in \cite{SBC} where the authors considered inertial dynamics with an asymptotic vanishing viscous damping coefficient. In doing so, they made the link with the accelerated gradient method of Nesterov \cite{Nest1,Nest4,BT} for unconstrained convex minimization. This has resulted in a flurry of research activity; see {\it e.g.}\,\, \cite{AC1,AC2,AC2R-EECT,ACR-rescale,ACPR,ACR-subcrit,AP,APR,APR2,AD,AAD,Bot-Cest2,CD,May,SDJS,WRJ}. In this paper, we consider the case of affinely \textit{constrained} convex structured minimization problems. To bring back the problem to the unconstrained case, there are two main ways: either penalize the constraint (by external penalization or an internal barrier method), or use (augmented) Lagrangian multiplier methods. Accounting for approximation/penalization terms within dynamical systems has been considered in a series of papers; see \cite{ACP,BCL} and the references therein. It is a flexible approach which can be applied to non-convex problems and/or ill-posed problems, making it a valuable tool for inverse problems. Its major drawback is that in general, it requires a subtle tuning of the approximation/penalization parameter. 
Here, we will consider the augmented Lagrangian approach and study the convergence properties of a second-order inertial dynamic with damping, which is attached to the augmented Lagrangian formulation of the affinely constrained convex minimization problem. The proposed dynamical system can be viewed as an inertial continuous-time counterpart of the ADMM method, originally proposed in the mid-1970s, which has gained considerable interest in recent years, in particular for solving large-scale composite optimization problems arising in data science. Among the novelties of our work, the dynamics we propose involves three parameters which vary in time. These are associated with viscous damping, extrapolation, and temporal scaling. By properly adjusting these parameters, we will provide fast convergence rates both for the values and the feasibility gap. The balance between the viscosity parameter (which tends towards zero) and the extrapolation parameter (which tends towards infinity) has already been developed in \cite{ZLC}, \cite{HHF} and \cite{ACR-Optimization-2020}, though for different problems. Temporal scaling techniques were considered in \cite{ABCR} for the case of convex minimization without affine constraint; see also \cite{ACFR,ACR-rescale,ACR-Pafa-2020}. Thus, another key contribution of this paper is to show that temporal scaling and extrapolation can be extended to the class of ADMM-type methods with improved convergence rates. Working with general coefficients and in general Hilbert spaces allows us to encompass the results obtained in the above-mentioned papers and to broaden their scope. It has been known for a long time that the optimality conditions of the (augmented) Lagrangian formulation of convex structured minimization problems with an affine constraint can be equivalently formulated as a monotone inclusion problem; see \cite{Rock1,Rock2,Rock3}.
In turn, the problem can be converted into finding the zeros of a maximally monotone operator, and can therefore be attacked using inertial methods for solving monotone inclusions. In this regard, let us mention the following recent works concerning the acceleration of ADMM methods via continuous-time inertial dynamics: \begin{enumerate}[label=$\bullet$] \item In \cite{Bot-Cest5}, the authors proposed an inertial ADMM by making use of the inertial version of the Douglas-Rachford splitting method for monotone inclusion problems recently introduced in \cite{BCH}, in the context of concomitantly solving a convex minimization problem and its Fenchel dual; see also \cite{Goldstein,PSB,PJ,PoonLiang} in the purely discrete setting. \item Attouch \cite{Att1} uses the maximally monotone operator which is associated with the augmented Lagrangian formulation of the problem, and specializes to this operator the inertial proximal point algorithm recently developed in \cite{AP-max} to solve general monotone inclusions. This gives rise to an inertial proximal ADMM algorithm where an appropriate adjustment of the viscosity and proximal parameters gives provably fast convergence properties, as well as the convergence of the iterates to saddle points of the Lagrangian function. This approach is in line with \cite{AS}, which considered the case without inertia. However, it fails to achieve a fully split inertial ADMM algorithm. \end{enumerate} \paragraph{Contents} In Section~\ref{sec:dynamic_formulation}, we introduce the inertial second-order dynamical system with damping (coined \eqref{eq:trials}) which is attached to the augmented Lagrangian formulation. In Section~\ref{sec:Lyap}, which is the main part of the paper, we develop a Lyapunov analysis to establish the asymptotic convergence properties of \eqref{eq:trials}. This gives rise to a system of inequalities/equalities which must be satisfied by the parameters of the dynamics. 
From the energy estimates thus obtained, we show in Section~\ref{Cauchy-problem} that the Cauchy problem attached to \eqref{eq:trials} is well-posed, {\it i.e.}\,\, existence and, possibly, uniqueness of a global solution. In Section~\ref{sec:strongly_convex}, we examine the case of uniformly convex objectives. In Section~\ref{sec:particular}, we provide specific choices of the system parameters that satisfy our assumptions and achieve fast convergence rates. This is then supplemented by preliminary numerical illustrations. Some conclusions and perspectives are finally outlined in Section~\ref{sec:conclusion}. \section{Problem statement}\label{sec:dynamic_formulation} Consider the structured convex optimization problem: \begin{equation}\tag{${\mathcal P}$}\label{eq:P} \min_{x\in{\mathcal X}, \; y\in{\mathcal Y}} F(x,y) := f(x) + g(y) \quad \text{ subject to } Ax + By = c, \end{equation} where, throughout the paper, we make the following standing assumptions: \begin{equation}\tag{${\mathcal H}_{{\mathcal P}}$}\label{eq:HP} \hspace*{-0.5cm} \begin{cases} \begin{tabular}{l} ${\mathcal X},{\mathcal Y},\mathcal Z$ are real Hilbert spaces; \\ $f: {\mathcal X} \rightarrow {\mathbb R}, \; g: {\mathcal Y} \rightarrow {\mathbb R}$ are convex functions of class $\mathcal C^1$; \\ $A: {\mathcal X} \to \mathcal Z, B: {\mathcal Y} \to \mathcal Z$ are linear continuous operators, $c \in \mathcal Z$;\\ The solution set of \eqref{eq:P} is non-empty. \end{tabular} \end{cases} \end{equation} Throughout, we denote by $\dotp{\cdot}{\cdot}$ and $\norm{\cdot}$ the scalar product and corresponding norm associated to any of ${\mathcal X},{\mathcal Y},\mathcal Z$, and the underlying space is to be understood from the context. 
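To fix ideas, here is a standard specialization of \eqref{eq:P}, stated only for orientation (it is our illustration, not one of the paper's worked examples): taking $\mathcal Z = {\mathcal Y}$, $B = -\mathrm{Id}$ and $c = 0$ recovers the prototypical composite problem addressed by ADMM-type splitting methods.

```latex
% Special case of (P): \mathcal{Z} = \mathcal{Y}, B = -\mathrm{Id}, c = 0.
\min_{x\in{\mathcal X},\; y\in{\mathcal Y}} f(x) + g(y)
\quad \text{subject to } Ax - y = 0,
\qquad\text{i.e.}\qquad
\min_{x\in{\mathcal X}} f(x) + g(Ax).
```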
\subsection{Augmented Lagrangian formulation} Classically, \eqref{eq:P} can be equivalently reformulated as the saddle point problem \begin{equation}\label{eq:minmax} \min_{(x,y)\in {\mathcal X}\times {\mathcal Y}} \max_{\lambda\in \mathcal Z} {\mathcal L}(x, y, \lambda), \end{equation} where ${\mathcal L} : {\mathcal X} \times {\mathcal Y} \times \mathcal Z \rightarrow {\mathbb R}$ is the Lagrangian associated with \eqref{eq:P} \begin{equation}\label{eq:Lag} {\mathcal L}(x, y, \lambda) := F(x,y) +\langle\lambda , Ax+By-c\rangle . \end{equation} Under our standing assumption~\eqref{eq:HP}, ${\mathcal L}$ is convex with respect to $(x,y)\in {\mathcal X} \times{\mathcal Y}$, and affine (and hence concave) with respect to $\lambda \in \mathcal Z$. A pair $(x^\star,y^\star)$ is optimal for \eqref{eq:P} and $\lambda^\star$ is a corresponding Lagrange multiplier if and only if $(x^\star, y^\star, \lambda^\star)$ is a saddle point of the Lagrangian function ${\mathcal L}$, {\it i.e.}\,\, for every $(x,y,\lambda) \in {\mathcal X} \times {\mathcal Y} \times \mathcal Z$, \begin{equation}\label{eq:saddlepoint} {\mathcal L}(x^\star,y^\star,\lambda) \leq {\mathcal L}(x^\star,y^\star,\lambda^\star) \leq {\mathcal L}(x,y,\lambda^\star). \end{equation} We denote by ${\mathscr S}$ the set of saddle points of ${\mathcal L}$. 
The corresponding optimality conditions read \begin{equation}\label{opt_system} (x^\star, y^\star, \lambda^\star)\in {\mathscr S} \Longleftrightarrow \begin{cases} \nabla_x {\mathcal L}(x^\star,y^\star,\lambda^\star)=0 \\ \nabla_y {\mathcal L}(x^\star,y^\star,\lambda^\star)=0 \\ \nabla_\lambda {\mathcal L}(x^\star,y^\star,\lambda^\star)=0 \end{cases} \Longleftrightarrow \begin{cases} \nabla f(x^\star)+A^*\lambda^\star=0 \\ \nabla g(y^\star)+B^*\lambda^\star=0 \\ Ax^\star+By^\star-c=0 \end{cases}, \end{equation} where we use the classical notations: $\nabla f$ and $\nabla g$ are the gradients of $f$ and $g$, $A^*$ is the adjoint operator of $A$, and similarly for $B$. The operator $\nabla_z$ is the gradient of the corresponding multivariable function with respect to variable $z$. Given $\mu > 0$, the augmented Lagrangian ${\mathcal L}_\mu : {\mathcal X} \times {\mathcal Y} \times \mathcal Z \rightarrow {\mathbb R} $ associated with the problem \eqref{eq:P}, is defined by \begin{equation}\label{eq:auglag} {\mathcal L}_\mu(x, y, \lambda) := {\mathcal L}(x, y, \lambda) + \frac\mu2 \|Ax + By - c\|^2. \end{equation} Observe that one still has $ (x^\star, y^\star, \lambda^\star)\in {\mathscr S} \Longleftrightarrow \begin{cases} \nabla_x {\mathcal L}_\mu(x^\star,y^\star,\lambda^\star)=0, \\ \nabla_y {\mathcal L}_\mu(x^\star,y^\star,\lambda^\star)=0, \\ \nabla_\lambda {\mathcal L}_\mu(x^\star,y^\star,\lambda^\star)=0. 
\end{cases} $ \subsection{The inertial system \eqref{eq:trials}} We will study the asymptotic behaviour, as $t \to +\infty$, of the inertial system: \boxed{ \begin{array}{rcl} && \text{\eqref{eq:trials}: Temporally Rescaled Inertial Augmented Lagrangian System.} \\ \hline \\ && \begin{cases} \ddot x(t)+\gamma(t)\dot x(t) + b(t)\nabla_x{\mathcal L}_\mu \Big(x(t),y(t),\lambda(t)+\alpha(t) \dot\lambda(t)\Big) &=0 \\ \ddot y(t)+\gamma(t)\dot y(t) + b(t)\nabla_y{\mathcal L}_\mu \Big(x(t),y(t),\lambda(t)+\alpha(t) \dot\lambda (t)\Big) &=0 \\ \ddot \lambda(t)+\gamma(t)\dot \lambda(t) - b(t)\nabla_\lambda {\mathcal L}_\mu \Big(x(t)+\alpha(t) \dot x(t),y(t)+\alpha(t) \dot y(t),\lambda(t)\Big)&=0, \end{cases} \end{array} } \noindent for $t \in [t_0,+\infty[$ with initial conditions $(x(t_0),y(t_0),\lambda(t_0))$ and $(\dot x(t_0),\dot y(t_0), \dot \lambda(t_0))$. The parameters of \eqref{eq:trials} play the following roles: \begin{enumerate}[label=$\bullet$] \item $\gamma(t)$ is a viscous damping parameter, \item $\alpha(t)$ is an extrapolation parameter, \item $b(t)$ is attached to the temporal scaling of the dynamic. \end{enumerate} In the sequel, we make the following standing assumption on these parameters: \begin{equation}\tag{${\mathcal H}_{{\mathcal D}}$}\label{eq:HD} \hspace*{-10pt} \gamma, \alpha, b: [t_0, +\infty[ \to {\mathbb R}^+ \text{ are non-negative continuously differentiable functions}. 
\end{equation} Plugging the expression of the partial gradients of ${\mathcal L}_\mu$ into the above system, the Cauchy problem associated with \eqref{eq:trials} is written as follows, where we drop the dependence of $(x,y,\lambda)$ on $t$ to lighten the formulas, \begin{equation}\tag{\rm{TRIALS}} \hspace*{-0.5cm} \begin{cases} \ddot x+\gamma (t) \dot x + b(t) \bpa{\nabla f (x) + A^* \brac{\lambda + \alpha(t) \dot\lambda + \mu (Ax+By-c)}} &=0 \\ \ddot y+\gamma (t)\dot y + b(t)\bpa{\nabla g (y) + B^* \brac{\lambda + \alpha(t) \dot\lambda + \mu (Ax+By-c)}} &=0 \\ \ddot \lambda+\gamma (t)\dot \lambda - b(t) \bpa{A(x + \alpha(t)\dot x) + B(y + \alpha(t)\dot y) -c} &= 0 \\ (x(t_0),y(t_0),\lambda(t_0)) = (x_0,y_0,\lambda_0) \enskip \text{and} \enskip \\ (\dot x(t_0),\dot y(t_0),\dot \lambda(t_0)) = (u_0,v_0,\nu_0) . \end{cases}\label{eq:trials} \end{equation} If, in addition to \eqref{eq:HP}, the gradients of $f$ and $g$ are Lipschitz continuous on bounded sets, we will show later in Section~\ref{Cauchy-problem} that the Cauchy problem associated with \eqref{eq:trials} has a unique global solution on $[t_0,+\infty[$. Indeed, although the existence and uniqueness of a local solution follow from the standard non-autonomous Cauchy-Lipschitz theorem, the global existence necessitates the energy estimates derived from the Lyapunov analysis in the next section. The centrality of these estimates is the reason why the proof of well-posedness is deferred to Section~\ref{Cauchy-problem}. Thus, for the moment, we take for granted the existence of classical solutions to \eqref{eq:trials}. \subsection{A fast convergence result} Our Lyapunov analysis will allow us to establish convergence results and rates under very general conditions on the parameters of \eqref{eq:trials}; see Section~\ref{sec:Lyap}. In fact, there are many situations of practical interest where such conditions are easily verified, and which will be discussed in detail in Section~\ref{sec:particular}. 
Thus, for the sake of illustration and the reader's convenience, here we describe an important situation where convergence occurs with the fast rate ${\mathcal O}(1/t^2)$. \begin{theorem} \label{thm:O-1/t2} Suppose that the coefficients of \eqref{eq:trials} satisfy \[ \alpha(t)= \alpha_{0} t \mbox{ with } \alpha_{0} >0, \; \gamma(t) = \frac{\eta+\alpha_0}{\alpha_0 t}, \; b(t)= t^{\frac{1}{\alpha_0}-2} , \] where $\eta > 1$. Suppose that the set of saddle points ${\mathscr S}$ is non-empty and let $(x^\star,y^\star,\lambda^\star) \in {\mathscr S}$. Then, any solution trajectory $(x(\cdot),y(\cdot),\lambda(\cdot))$ of \eqref{eq:trials} remains bounded, and we have the following convergence rates: \begin{align*} {\mathcal L}(x(t),y(t),\lambda^\star) - {\mathcal L}(x^\star,y^\star,\lambda^\star) &= {\mathcal O}\pa{\frac{1}{t^{\frac{1}{\alpha_0}}}}, \\ \norm{Ax(t)+By(t)-c}^2 &= {\mathcal O}\pa{\frac{1}{t^{\frac{1}{\alpha_0}}}} , \\ -\frac{C_1}{t^{\frac{1}{2\alpha_0}}} \leq F( x(t),y(t))-F(x^\star,y^\star) &\leq \frac{C_2}{t^{\frac{1}{\alpha_0}}} , \\ \anorm{(\dot {x}(t), \dot {y}(t), \dot{\lambda}(t))} &= {\mathcal O}\pa{\dfrac1{t}} , \end{align*} where $C_1$ and $C_2$ are positive constants. In particular, for $\alpha_0 = \demi$, {\it i.e.}\,\, no time scaling ($b \equiv 1$), we have \begin{align*} {\mathcal L}(x(t),y(t),\lambda^\star) - {\mathcal L}(x^\star,y^\star,\lambda^\star) &= {\mathcal O}\pa{\frac{1}{t^{2}}}, \\ \norm{Ax(t)+By(t)-c}^2 &= {\mathcal O}\pa{\frac{1}{t^{2}}} , \\ -\frac{C_1}{t} \leq F( x(t),y(t))-F(x^\star,y^\star) &\leq \frac{C_2}{t^{2}} . \end{align*} \end{theorem} For the ADMM algorithm (thus in discrete time $t=k h$, $k \in {\mathbb N}, h > 0$), it has been shown in \cite{DavisYin16,DavisYin17} that the convergence rate of the (squared) feasibility is ${\mathcal O}\pa{\frac{1}{k}}$ and that of $|F(x_k,y_k)-F(x^\star,y^\star)|$ is ${\mathcal O}\pa{\frac{1}{k^{1/2}}}$. These rates were shown to be essentially tight in \cite{DavisYin16}. 
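As a quick sanity check on the exponents in Theorem~\ref{thm:O-1/t2} (a computation of ours, for orientation only), observe how the stated parameter choice combines:

```latex
% With \alpha(t) = \alpha_0 t and b(t) = t^{1/\alpha_0 - 2}:
\alpha(t)^2 b(t)
\;=\; \alpha_0^2\, t^2 \cdot t^{\frac{1}{\alpha_0}-2}
\;=\; \alpha_0^2\, t^{\frac{1}{\alpha_0}} .
```

Hence a quantity decaying like ${\mathcal O}\pa{1/(\alpha(t)^2 b(t))}$, as delivered by the Lyapunov analysis of Section~\ref{sec:Lyap} with constant $\sigma$, decays like $t^{-1/\alpha_0}$, which is exactly $t^{-2}$ when $\alpha_0 = \demi$.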
Our results then suggest that for $\alpha_0=\demi$, a proper discretization of \eqref{eq:trials} would lead to an accelerated ADMM algorithm with provably faster convergence rates (see \cite{Kang15,Kang13} in this direction on specific problem instances and algorithms). These discrete algorithmic issues of \eqref{eq:trials} will be investigated in a future work. Again, for $\alpha_0 = \demi$, the ${\mathcal O}\pa{\frac{1}{t^2}}$ rate obtained on the Lagrangian is reminiscent of the fast convergence obtained with the continuous-time dynamical version of the Nesterov accelerated gradient method, in which the viscous damping coefficient is of the form $\gamma (t) = \frac{\gamma_0}{t}$ and the fast rate is obtained for $\gamma_0 \geq 3$; see \cite{ACPR,SBC}. With our notations this corresponds to $\gamma_0 = \frac{\eta+\alpha_0}{\alpha_0}$, and our choice $\alpha_0 = \demi$ entails $\gamma_0=2\eta+1 > 3$. This corresponds to the same critical value as Nesterov's, but the inequality here is strict. This is not that surprising in our context, since one has to handle the dual multiplier and there is an intricate interplay between $\gamma$ and the extrapolation coefficient $\alpha$. \subsection{The role of extrapolation} One of the key and distinctive features of \eqref{eq:trials} is that the partial gradients (with the appropriate sign) of the augmented Lagrangian function are not evaluated at $(x(t),y(t),\lambda(t))$, as would be the case in a classical continuous-time system associated to ADMM-type methods, but rather at extrapolated points. This new property will be instrumental in allowing for faster convergence rates, and it can be interpreted from different standpoints: optimization, game theory, or control: \begin{enumerate}[label=$\bullet$] \item \textit{Optimization standpoint}: in this field, this type of extrapolation was recently studied in \cite{ACR-Optimization-2020,HHF,ZLC}. It will play a key role in the development of our Lyapunov analysis. 
Observe that $\alpha(t) \dot{x}(t)$ and $\alpha(t) \dot{\lambda}(t)$ point in the direction of future movement of $x(t)$ and $\lambda(t)$. Thus, \eqref{eq:trials} involves the estimated future positions $x(t) + \alpha(t) \dot{x}(t)$ and $\lambda(t) + \alpha(t) \dot{\lambda}(t)$. Explicit discretization $x_k + \alpha_k (x_{k}-x_{k-1})$ and $\lambda_k + \alpha_k (\lambda_{k}-\lambda_{k-1})$ gives an extrapolation similar to the accelerated method of Nesterov. The implicit discretization reads $x_k + \alpha_k (x_{k+1}-x_k)$ and $\lambda_k + \alpha_k (\lambda_{k+1}-\lambda_k)$. For $\alpha_k=1$, this gives $x_{k+1}$ and $\lambda_{k+1}$, which would yield implicit algorithms with associated stability properties. \item \textit{Game theoretic standpoint}: let us think about $(x,y)$ and $\lambda$ as two players playing against each other, and, for short, we identify the players with their actions. We can then see that in \eqref{eq:trials}, each player anticipates the movement of its opponent. In the coupling term, the player $(x,y)$ takes into account the anticipated position of the player $\lambda$, which is $\lambda(t) + \alpha(t) \dot{\lambda}(t)$, and vice versa. \item \textit{Control theoretic standpoint}: the structure of \eqref{eq:trials} is also related to control theory and state derivative feedback. By defining $w(t)= (x(t), y(t), \lambda(t))$, the equation can be written equivalently as \[ \ddot{w}(t) + \gamma (t) \dot{w}(t) = K(t,w(t), \dot{w}(t)), \] for an operator $K$ appropriately identified from \eqref{eq:trials} in terms of the partial gradients of ${\mathcal L}_\mu$, $\alpha$ and $b$. In this system, the feedback control term $K$, which takes the constraint into account, is not only a function of the state $w(t)$ but also of its derivative. One can consult \cite{MVHN} for a comprehensive treatment of state derivative feedback. 
Indeed, we will use $\alpha(\cdot)$ as a control variable, which will turn out to play an important role in our subsequent developments. \end{enumerate} \subsection{Associated monotone inclusion problem} The optimality system \eqref{opt_system} can be written equivalently as \begin{equation}\label{descrip00} T_{{\mathcal L}}(x, y, \lambda) = 0 , \end{equation} where $T_{{\mathcal L}}: {\mathcal X} \times {\mathcal Y} \times \mathcal Z \to {\mathcal X} \times {\mathcal Y} \times \mathcal Z $ is the maximally monotone operator associated with the convex-concave function ${\mathcal L}$, and which is defined by \begin{eqnarray} T_{{\mathcal L}}(x, y, \lambda) &= &\left( \nabla_{x,y} {\mathcal L}, \, -\nabla_{\lambda} {\mathcal L} \right)(x, y, \lambda) \nonumber \\ &=& \left(\nabla f(x) + A^* \lambda, \; \nabla g(y) + B^* \lambda, \; -( Ax +By-c)\right).\label{descrip01} \end{eqnarray} Indeed, it is immediate to verify that $T_{{\mathcal L}}$ is monotone using \eqref{eq:HP}. Since it is continuous, it is a maximally monotone operator. Another way of seeing it is to use the standard splitting of $T_{{\mathcal L}}$ as $ T_{{\mathcal L}}= T_1 + T_2 $ where \begin{eqnarray*} T_1(x, y, \lambda)& =& \left(\nabla f(x) , \ \nabla g(y) , 0 \right)\\ T_2 (x, y, \lambda) &=& \left( A^* \lambda, \ B^* \lambda, \ -( Ax +By-c) \right). \end{eqnarray*} The operator $T_1 = \partial \Phi $ is nothing but the gradient of the convex function $\Phi (x,y,\lambda) = f(x) + g(y)$, and therefore is maximally monotone owing to \eqref{eq:HP} (recall that convexity of a differentiable function implies maximal monotonicity of its gradient \cite{Rock1}). The operator $T_2$ is obtained by translating a linear continuous and skew-symmetric operator, and therefore it is also maximally monotone. This immediately implies that $T_{{\mathcal L}}$ is maximally monotone as the sum of two maximally monotone operators, one of them being Lipschitz continuous (\cite[Lemma~2.4, page~34]{Bre1}). 
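For the reader's convenience, here is a one-line verification (ours, not part of the original argument) of the skew-symmetry behind the monotonicity of $T_2$, in the case $c=0$ where $T_2$ is linear:

```latex
% Skew-symmetry of T_2 when c = 0: for every (x, y, \lambda),
\dotp{T_2(x,y,\lambda)}{(x,y,\lambda)}
= \dotp{A^*\lambda}{x} + \dotp{B^*\lambda}{y} - \dotp{Ax+By}{\lambda}
= 0 .
```

Consequently $\dotp{T_2 w - T_2 w'}{w - w'} = 0$ for all $w, w'$, so $T_2$ is monotone; for general $c$, $T_2$ differs from this linear operator only by the constant vector $(0,0,c)$, which does not affect monotonicity.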
In turn, ${\mathscr S}$ can be interpreted as the set of zeros of the maximally monotone operator $T_{{\mathcal L}}$. As such, it is a closed convex subset of ${\mathcal X} \times {\mathcal Y} \times \mathcal Z$. The evolution equation associated to $T_{{\mathcal L}}$ is written \begin{equation}\label{basic-dyn} \left\{\begin{array}{lll} \; \dot{x}(t) + \nabla f (x(t)) + A^* (\lambda(t)) &=&0 \\ \; \dot{y}(t) + \nabla g (y(t)) +B^* (\lambda(t)) &=&0 \\ \; \dot{\lambda}(t) - (A(x(t)) + B(y(t)) -c)&=&0 \end{array}\right. \end{equation} Following \cite{Bre1}, the Cauchy problem \eqref{basic-dyn} is well-posed, and the solution trajectories of \eqref{basic-dyn}, which define a semi-group of contractions generated by $T_{{\mathcal L}}$, converge weakly in an ergodic sense to equilibria, which are the zeros of the operator $T_{{\mathcal L}}$. Moreover, appropriate implicit discretization of \eqref{basic-dyn} yields the proximal ADMM algorithm. The situation is more complicated if we consider the corresponding inertial dynamics. Indeed, the convergence theory for the heavy ball method can be naturally extended to the case of maximally monotone cocoercive operators. Unfortunately, because of the skew-symmetric component $T_2$ in $T_{{\mathcal L}}$ (when $c=0$), the operator $T_{{\mathcal L}}$ is \textit{not} cocoercive. To overcome this difficulty, recent studies consider inertial dynamics where the operator $T_{{\mathcal L}}$ is replaced by its Yosida approximation, with an appropriate adjustment of the Yosida parameter; see \cite{AP-max} and \cite{Att1} in the case of the Nesterov accelerated method. However, such an approach does not achieve full splitting algorithms, hence requiring an additional internal loop. \section{Lyapunov analysis}\label{sec:Lyap} Let $(x^\star,y^\star) \in {\mathcal X} \times {\mathcal Y}$ be a solution of \eqref{eq:P}, and denote by $F^\star := F(x^\star,y^\star)$ the optimal value of \eqref{eq:P}. 
For the moment, the variable $\lambda^\star$ is chosen arbitrarily in $\mathcal Z $. We will then be led to specialize it. Let $t \mapsto (x(t),y(t),\lambda(t))$ be a solution trajectory of \eqref{eq:trials} defined for $t\geq t_0$. It is supposed to be a classical solution, {\it i.e.}\,\, of class ${\mathcal C}^2$. We are now in position to introduce the function $t \in [t_0, +\infty[\; \mapsto {\mathcal E}(t)\in {\mathbb R}$ that will serve as a Lyapunov function, \begin{eqnarray} &&{\mathcal E}(t):= \delta^2(t)b(t)\Big( {\mathcal L}_\mu(x(t), y(t), \lambda^\star)- {\mathcal L}_\mu(x^\star, y^\star, \lambda^\star)\Big)+ \frac{1}{2}\norm{v(t)}^{2} \label{eq:lyapcont}\\ && \qquad\qquad + \frac12\xi(t)\|(x(t), y(t), \lambda(t))-(x^\star, y^\star, \lambda^\star)\|^2, \nonumber \\ && v(t):= \sigma (t)\Big((x(t), y(t), \lambda(t))-(x^\star, y^\star, \lambda^\star)\Big)+\delta(t)(\dot x(t), \dot y(t), \dot \lambda(t)). \label{eq:lyapcont_b} \end{eqnarray} The coefficient $\sigma(t)$ is non-negative and will be adjusted later, while $\delta(t), \xi(t)$ are explicitly defined by the following formulas: \begin{equation}\label{basic_choice_0} \begin{cases} \delta(t) := \sigma(t) \alpha(t), \\ \xi(t) := \sigma(t)^2\Big(\gamma(t)\alpha (t)-\dot \alpha (t)-1 \Big) -2 \alpha(t) \sigma(t) \dot \sigma(t) \end{cases} \end{equation} This choice will become clear from our Lyapunov analysis. To guarantee that ${\mathcal E}$ is a Lyapunov function for the dynamical system \eqref{eq:trials}, the following conditions on the coefficients $\gamma, \, \alpha, \, b, \, \sigma$ will naturally arise from our analysis: \begin{center} \begin{tabular}{|c|}\hline Lyapunov system of inequalities/equalities on the parameters. 
\\\hline \parbox{\textwidth}{ \begin{enumerate}[label=(${\mathcal G}_{\arabic*}$),itemindent=10ex] \item $\sigma(t)\bpa{\gamma(t)\alpha (t)-\dot \alpha (t) -1} -2 \alpha(t) \dot \sigma(t) \geq 0$, \label{cond:G1} \item $\sigma (t)\bpa{\gamma(t)\alpha (t)- \dot \alpha (t) - 1} - \alpha(t)\dot{\sigma} (t) \geq 0$, \label{cond:G2} \item $-\frac{d}{dt}\brac{\sigma(t) \pa{\sigma (t)\bpa{\gamma (t)\alpha(t) - \dot\alpha (t)} -2 \alpha (t)\dot\sigma (t)}} \geq 0$, \label{cond:G3} \item $\alpha (t)\sigma (t)^2 b(t) -\frac{d}{dt}\left(\alpha^2 \sigma^2 b\right)(t)= 0$. \label{cond:G4} \end{enumerate}}\\\hline \end{tabular} \end{center} Observe that condition \ref{cond:G1} automatically ensures that $\xi(t)$ is a non-negative function. In most practical situations (see Section~\ref{sec:particular}), we will take $\sigma$ as a non-negative constant, in which case \ref{cond:G1} and \ref{cond:G2} coincide, and thus conditions \ref{cond:G1}--\ref{cond:G4} reduce to a system of three differential inequalities/equalities involving only the coefficients $(\gamma,\alpha,b)$ of the dynamical system \eqref{eq:trials}. \subsection{Convergence rate of the values} By relying on a Lyapunov analysis with the function ${\mathcal E}$, we are now ready to state our first main result. \begin{theorem}\label{ACFR,rescale} Assume that \eqref{eq:HP} and \eqref{eq:HD} hold. 
Suppose that the growth conditions \ref{cond:G1}--\ref{cond:G4} on the parameters $(\gamma, \alpha, \sigma, b)$ of \eqref{eq:trials} are satisfied for all $t\geq t_0$. Let $t\in [t_0, +\infty[ \mapsto (x(t),y(t),\lambda(t))$ be a solution trajectory of \eqref{eq:trials}. Let ${\mathcal E}$ be the function defined in \eqref{eq:lyapcont}-\eqref{eq:lyapcont_b}. Then the following holds: \begin{enumerate}[label=(\arabic*)] \item ${\mathcal E}$ is a non-increasing function, and for all $t\geq t_0$ \label{theoACFRrescale:itema} \[ F(x(t),y(t))-F^\star = {\mathcal O}\pa{\frac{1}{\alpha(t)^2\sigma(t)^2 b(t)}} . \] \item Suppose moreover that ${\mathscr S}$, the set of saddle points of ${\mathcal L}$ in \eqref{eq:minmax}, is non-empty, and let $(x^\star,y^\star,\lambda^\star) \in {\mathscr S}$. Then for all $t\geq t_0$, the following rates and integrability properties are satisfied: \label{theoACFRrescale:itemb} \begin{enumerate}[label=(\roman*),itemindent=10ex] \item \label{theoACFRrescale:itembi} $ 0 \leq {\mathcal L}(x(t),y(t),\lambda^\star) - {\mathcal L}(x^\star,y^\star,\lambda^\star)={\mathcal O}\pa{\frac {1}{\alpha(t)^2\sigma(t)^2 b(t)}}; $ \item \label{theoACFRrescale:itembii} $ \anorm{Ax(t)+By(t)-c}^2={\mathcal O}\pa{\frac {1}{\alpha(t)^2\sigma(t)^2 b(t)}}; $ \item there exist positive constants $C_1$ and $C_2$ such that \label{theoACFRrescale:itembiii} \[ -\frac{C_1}{\alpha(t)\sigma(t) \sqrt{b(t)}} \leq F(x(t),y(t))-F^\star \leq \frac{C_2}{\alpha(t)^2\sigma(t)^2 b(t)}; \] \item \label{theoACFRrescale:itembiv} $ \displaystyle{\int_{t_0}^{+\infty} \alpha(t)\sigma(t)^2b(t)\anorm{Ax(t)+By(t)-c}^2 dt <+\infty} ; $ \item \label{theoACFRrescale:itembv} $ \displaystyle{\int_{t_0}^{+\infty} k(t) \anorm{(\dot {x}(t), \dot {y}(t), \dot{\lambda}(t))}^2 dt <+\infty}, $ where \[ k(t)= \alpha (t)\sigma(t) \bpa{\sigma(t)\pa{\gamma(t)\alpha (t) - \dot \alpha(t) - 1} - \alpha(t) \dot \sigma (t)}. 
\] \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} To lighten notation, we drop the dependence on the time variable $t$. Recall that $(x^\star,y^\star)$ is a solution of \eqref{eq:P} and $\lambda^\star$ is an arbitrary vector in $\mathcal Z$. Let us define \[ w := (x, y, \lambda), \quad w^\star:= (x^\star, y^\star, \lambda^\star),\quad {\mathcal F}_\mu(w) := {\mathcal L}_\mu(x, y, \lambda^\star)- {\mathcal L}_\mu(x^\star, y^\star, \lambda^\star). \] With these notations we have (recall \eqref{eq:lyapcont} and \eqref{eq:lyapcont_b}) \begin{eqnarray*} && v=\sigma( w-w^\star)+\delta\dot w,\, \\ && \nabla {\mathcal F}_\mu(w)=(\nabla_x {\mathcal L}_\mu(x, y, \lambda^\star), \nabla_y {\mathcal L}_\mu(x, y, \lambda^\star),0)\\ && {\mathcal E}= \delta^2b{\mathcal F}_\mu(w) + \frac{1}{2}\norm{v}^{2} +\frac12\xi\anorm{w-w^\star}^2. \end{eqnarray*} Differentiating ${\mathcal E}$ gives \begin{equation}\label{der-E} \dfrac{d}{dt}{\mathcal E}=\dfrac{d}{dt}(\delta^2b) {\mathcal F}_\mu(w)+ \delta^2b \dotp{\nabla {\mathcal F}_\mu(w)}{\dot{w}}+ \dotp{v}{\dot{v}} + \frac12\dot\xi\|w-w^\star\|^2 + \xi \dotp{w-w^\star}{\dot{w}}. 
\end{equation} Using the constitutive equation in \eqref{eq:trials}, we have \begin{align*} \dot{v} & = \dot \sigma (w-w^\star) + (\sigma + \dot\delta )\dot w + \delta\ddot w \\ & = \dot \sigma (w-w^\star) + (\sigma + \dot\delta )\dot w - \delta \pa{\gamma\dot w + b K_{\mu,\alpha}(w)} \\ & = \dot \sigma (w-w^\star) + (\sigma + \dot\delta - \delta \gamma)\dot w - \delta b K_{\mu,\alpha}(w) , \end{align*} where the operator $K_{\mu,\alpha}: {\mathcal X} \times {\mathcal Y} \times \mathcal Z \to {\mathcal X} \times {\mathcal Y} \times \mathcal Z $ is defined by \begin{equation*} K_{\mu,\alpha}(w) := \begin{bmatrix} \nabla_x{\mathcal L}_\mu(x,y,\lambda+\alpha \dot\lambda )\\ \nabla_y{\mathcal L}_\mu(x,y,\lambda+\alpha \dot\lambda )\\ -\nabla_\lambda {\mathcal L}_\mu(x+\alpha \dot x,y+\alpha \dot y,\lambda) \end{bmatrix} \end{equation*} Elementary computation gives \begin{equation*} K_{\mu,\alpha}(w) = \nabla {\mathcal F}_\mu(w) + \begin{bmatrix} A^*(\lambda-\lambda^\star+\alpha \dot\lambda ) \\ B^*(\lambda-\lambda^\star+\alpha \dot\lambda ) \\ -A(x+\alpha \dot x)-B(y+\alpha \dot y)+c \end{bmatrix} \end{equation*} According to the above formulas for $v$, $\dot v$ and $K_{\mu,\alpha}$, we get \begin{eqnarray*} \dotp{v}{\dot{v}} &=& \dotp{\dot \sigma (w-w^\star) + (\sigma + \dot \delta - \delta \gamma)\dot w - \delta b K_{\mu,\alpha}(w)}{\sigma( w-w^\star)+\delta\dot w}\\ &=& \sigma \dot\sigma\anorm{w-w^\star}^2 + \pa{\delta\dot\sigma+\sigma(\sigma+\dot\delta-\delta\gamma)}\dotp{\dot w}{w-w^\star} + \delta\pa{\sigma+\dot\delta-\delta\gamma} \anorm{\dot w}^2\\ && -\delta b\brac{\sigma\dotp{\nabla{\mathcal F}_\mu(w )}{w-w^\star} + \delta \dotp{\nabla{\mathcal F}_\mu(w)}{\dot w}}\\ && -\delta b\brac{\sigma \dotp{\lambda-\lambda^\star+\alpha \dot\lambda}{Ax-Ax^\star} + \delta \dotp{\lambda-\lambda^\star+\alpha \dot\lambda}{A\dot x}}\\ && -\delta b\brac{\sigma \dotp{\lambda-\lambda^\star+\alpha \dot\lambda}{By-By^\star} + \delta \dotp{\lambda-\lambda^\star+\alpha 
\dot\lambda}{B\dot y}}\\ && +\delta b \sigma \dotp{A(x+\alpha \dot x)+B(y+\alpha \dot y)-c}{\lambda-\lambda^\star} \\ &&+ \delta^2 b \dotp{A(x+\alpha \dot x)+B(y+\alpha \dot y)-c}{\dot \lambda}. \end{eqnarray*} Let us insert this expression in \eqref{der-E}. We first observe that the term $\dotp{\nabla{\mathcal F}_\mu(w)}{\dot w}$ appears twice but with opposite signs, and therefore cancels out. Moreover, the coefficient of $\langle \dot w,w-w^\star\rangle$ becomes $\xi+ \delta\dot\sigma -\sigma(\gamma\delta -\dot \delta- \sigma)$. Thanks to the choice of $\delta$ and $\xi$ devised in \eqref{basic_choice_0}, the term $\dotp{\dot w}{w-w^\star}$ also disappears. We recall that by virtue of \ref{cond:G1}, $\xi$ is non-negative, and thus so is the last term in ${\mathcal E}$. Overall, the formula \eqref{der-E} simplifies to \begin{equation}\label{basic-Lyap1} \begin{split} \dfrac{d}{dt}{\mathcal E} &= \dfrac{d}{dt}(\delta^2b) {\mathcal F}_\mu(w)+ \pa{\frac12\dot\xi + \sigma\dot\sigma} \anorm{w-w^\star}^2 + \delta\pa{\sigma+\dot\delta-\delta\gamma}\anorm{\dot w}^2 \\ &-\delta b \sigma \dotp{\nabla{\mathcal F}_\mu(w)}{w-w^\star} - \delta b {\mathcal W} , \end{split} \end{equation} where \begin{eqnarray*} {\mathcal W} &:=& \sigma \dotp{\lambda-\lambda^\star+\alpha \dot\lambda}{Ax-Ax^\star} + \delta \dotp{\lambda-\lambda^\star+\alpha \dot\lambda}{A\dot x}\\ && + \sigma \dotp{\lambda-\lambda^\star+\alpha \dot\lambda}{By-By^\star} + \delta \dotp{\lambda-\lambda^\star+\alpha \dot\lambda}{B\dot y}\\ && - \sigma \dotp{A(x+\alpha \dot x)+B(y+\alpha \dot y)-c}{\lambda-\lambda^\star} \\ && - \delta \dotp{A(x+\alpha \dot x)+B(y+\alpha \dot y)-c}{\dot \lambda}. \end{eqnarray*} Since $(x^\star,y^\star) \in {\mathcal X}\times{\mathcal Y} $ is a solution of \eqref{eq:P}, we obviously have $A x^\star + B y^\star =c$. 
Thus, ${\mathcal W}$ reduces to \begin{eqnarray*} {\mathcal W} &=& \sigma \dotp{Ax+By-c}{\lambda-\lambda^\star+\alpha \dot\lambda} + \delta \dotp{A\dot x + B\dot y}{\lambda-\lambda^\star+\alpha \dot\lambda}\\ && - \sigma \dotp{Ax+By-c}{\lambda-\lambda^\star} - \sigma\alpha\dotp{A\dot x + B\dot y}{\lambda-\lambda^\star}\\ && - \delta \dotp{Ax+By-c}{\dot\lambda} - \delta\alpha\dotp{A\dot x + B\dot y}{\dot\lambda} \\ &=& (\sigma \alpha - \delta)\bpa{\dotp{Ax+By-c}{\dot \lambda} - \dotp{A \dot x + B \dot y}{\lambda-\lambda^\star}}. \end{eqnarray*} Since it is difficult to control the sign of the above expression, the choice of $\delta$ in \eqref{basic_choice_0} appears natural, which entails ${\mathcal W} =0$. On the other hand, by convexity of ${\mathcal L}(\cdot,\cdot,\lambda^\star)$, strong convexity of $\frac{\mu}{2}\anorm{\cdot - c}^2$, the fact that $Ax^\star+By^\star = c$ and ${\mathcal F}_\mu(w^\star)=0$, it is straightforward to see that \[ -{\mathcal F}_\mu(w) - \frac{\mu}{2}\norm{Ax(t)+By(t) - c}^2 \geq \dotp{\nabla {\mathcal F}_\mu(w)}{w^\star-w}. \] Collecting the above results, \eqref{basic-Lyap1} becomes \begin{eqnarray}\label{basic-lyap-1} &&\dfrac{d}{dt}{\mathcal E} + \pa{\delta b \sigma-\dfrac{d}{dt}(\delta^2b)}{\mathcal F}_\mu(w) \\ &&\leq \pa{\frac12\dot\xi + \sigma\dot\sigma} \anorm{w-w^\star}^2 + \delta\pa{\sigma+\dot\delta-\delta\gamma}\anorm{\dot w}^2 - \frac{\delta b \sigma \mu}{2}\norm{Ax(t)+By(t) - c}^2. \nonumber \end{eqnarray} Since $\delta$ is non-negative ($\sigma$ and $\alpha$ are), and in view of \ref{cond:G2}, the coefficient of the second term on the right-hand side of \eqref{basic-lyap-1} is non-positive. The same conclusion holds for the coefficient of the first term, since its non-positivity is equivalent to \ref{cond:G3}. Therefore, inequality \eqref{basic-lyap-1} implies \begin{equation}\label{basic-Liap-22} \dfrac{d}{dt}{\mathcal E} +\left( \delta b \sigma-\dfrac{d}{dt}(\delta^2b) \right){\mathcal F}_\mu(w) \leq 0 . 
\end{equation} The sign of ${\mathcal F}_\mu(w)$ is unknown for arbitrary $\lambda^\star$. This is precisely where we invoke \ref{cond:G4} which is equivalent to \begin{equation*}\label{def:sigma3} \delta b \sigma-\dfrac{d}{dt}(\delta^2b)= 0. \end{equation*} \begin{enumerate}[label=(\arabic*)] \item Altogether, we have shown so far that \eqref{basic-Liap-22} eventually reads, for any $t\geq t_0$, \begin{equation}\label{Lypa_0} \dfrac{d}{dt}{\mathcal E}(t) \leq 0 , \end{equation} {\it i.e.}\,\, ${\mathcal E}$ is non-increasing as claimed. Let us now turn to the rates. ${\mathcal E}$ being non-increasing entails that for all $t\geq t_0$ \begin{equation}\label{Lypa_decr} {\mathcal E}(t) \leq {\mathcal E}(t_0) . \end{equation} Dropping the non-negative terms $\frac{1}{2}\norm{v(t)}^{2}$ and $\frac12\xi(t)\|w(t)-w^\star\|^2$ entering ${\mathcal E}$, and according to the definition of ${\mathcal L}_\mu$, we obtain that, for all $t\geq t_0$ \begin{eqnarray} &&\delta(t)^2b(t)\bpa{{\mathcal L}_\mu(x(t), y(t), \lambda^\star)- {\mathcal L}_\mu(x^\star, y^\star, \lambda^\star)} \label{rat-conv-L1}\\ &&= \delta(t)^2b(t) \bpa{{\mathcal L}(x(t), y(t), \lambda^\star)- {\mathcal L}(x^\star, y^\star, \lambda^\star) + \frac{\mu}{2}\norm{Ax(t)+By(t) - c}^2} \leq {\mathcal E}(t_0) . 
\nonumber \end{eqnarray} Dropping again the quadratic term in \eqref{rat-conv-L1}, we obtain \begin{align*} \delta(t)^2b(t) &\bpa{F(x(t),y(t))-F^\star + \dotp{\lambda^\star}{Ax(t)+By(t)-c}} \\ &\leq \delta^2(t_0)b(t_0)\Big(F(x(t_0),y(t_0))-F^\star + \dotp{\lambda^\star}{Ax(t_0)+By(t_0)-c} \\ &+\frac{\mu}{2}\anorm{Ax(t_0)+By(t_0)-c}^2\Big)+ \frac{1}{2}\anorm{v(t_0)}^{2}\\ &+\frac12\xi(t_0)\anorm{(x(t_0), y(t_0), \lambda(t_0))-(x^\star, y^\star, \lambda^\star)}^2\\ &\leq \delta^2(t_0)b(t_0)\anorm{\lambda^\star}\anorm{Ax(t_0)+By(t_0)-c} + C_0, \end{align*} where $C_0$ is the non-negative constant \begin{multline}\label{eq:C0} C_0 = \delta^2(t_0)b(t_0)\bpa{\abs{F(x(t_0),y(t_0))-F^\star} +\frac{\mu}{2}\anorm{Ax(t_0)+By(t_0)-c}^2} \\ + \frac{1}{2}\anorm{v(t_0)}^{2} + \frac12\xi(t_0)\anorm{(x(t_0), y(t_0), \lambda(t_0))-(x^\star, y^\star, \lambda^\star)}^2 . \end{multline} When $Ax(t)+By(t)-c = 0$, we are done by taking, {\it e.g.}\,\, $\lambda^\star = 0$ and $C > C_0$. Assume now that $Ax(t)+By(t)-c \neq 0$. Since $ \lambda^\star$ can be freely chosen in $\mathcal Z$, we take it as the unit-norm vector \begin{equation}\label{eq:lambdaunit} \lambda^\star = \frac{Ax(t)+By(t)-c}{\anorm{Ax(t)+By(t)-c}} . \end{equation} We therefore obtain \begin{equation}\label{eq:estimatelyap} \delta(t)^2b(t) \bpa{F(x(t),y(t))-F^\star + \anorm{Ax(t)+By(t)-c}} \leq C , \end{equation} where $C > \delta^2(t_0)b(t_0)\norm{Ax(t_0)+By(t_0)-c} + C_0$. Since the second term in the left hand side is non-negative, the claimed rate in \ref{theoACFRrescale:itema} follows immediately. \item Embarking from \eqref{rat-conv-L1} and using \eqref{eq:saddlepoint} since $(x^\star,y^\star,\lambda^\star) \in {\mathscr S}$, we have the rates stated in \ref{theoACFRrescale:itembi} and \ref{theoACFRrescale:itembii}. 
To show the lower bound in \ref{theoACFRrescale:itembiii}, observe that the upper-bound of \eqref{eq:saddlepoint} entails that \begin{equation}\label{basic_minoration} F(x(t), y(t)) \geq F(x^\star, y^\star) - \dotp{Ax(t)+ By(t) - c}{\lambda^\star} . \end{equation} Applying Cauchy-Schwarz inequality, we infer \[ F(x(t), y(t)) \geq F(x^\star, y^\star) - \anorm{\lambda^\star}\anorm{Ax(t)+ By(t)-c}. \] We now use the estimate \ref{theoACFRrescale:itembii} to conclude. Finally the integral estimates of the feasibility \ref{theoACFRrescale:itembiv} and velocity \ref{theoACFRrescale:itembv} are obtained by integrating \eqref{basic-lyap-1}. \qed \end{enumerate} \end{proof} \subsection{Boundedness of the trajectory and rate of the velocity} We will further exploit the Lyapunov analysis developed in the previous section to assert additional properties on the iterates and velocities. 
\begin{theorem}\label{ACFR_rescale_boundedness} Suppose the assumptions of Theorem~\ref{ACFR,rescale} hold. Assume also that ${\mathscr S}$, the set of saddle points of ${\mathcal L}$ in \eqref{eq:minmax} is non-empty, and let $(x^\star, y^\star, \lambda^\star) \in {\mathscr S}$. Then, each solution trajectory $t\in [t_0, +\infty[ \mapsto (x(t),y(t),\lambda(t))$ of \eqref{eq:trials} satisfies the following properties: \begin{enumerate}[label=(\arabic*)] \item There exists a positive constant $C$ such that, for all $t \geq t_0$ \begin{eqnarray*} &&\anorm{(x(t), y(t), \lambda(t))-(x^\star, y^\star, \lambda^\star)}^2 \leq \frac{C}{\sigma (t)^2\Big(\gamma(t)\alpha (t)-\dot \alpha (t)-1 \Big) -2 \alpha(t) \sigma(t) \dot \sigma(t)} \\ &&\norm{(\dot {x}(t), \dot {y}(t), \dot{\lambda}(t))} \leq \frac{C}{\alpha(t)\sigma(t)}\pa{1+\sqrt{\frac{\sigma(t)}{\sigma (t)\pa{\gamma(t)\alpha (t)-\dot \alpha (t)-1} -2 \alpha(t) \dot \sigma(t)}}}. \end{eqnarray*} \item \label{ACFR_rescale_boundedness:itemii} If $\sup_{t \geq t_0} \sigma(t) < +\infty$ and \ref{cond:G1} is strengthened to \begin{enumerate}[label=(${\mathcal G}_{\arabic*}^{+}$),itemindent=10ex] \item $\inf_{t\geq t_0} \sigma(t)\bpa{\sigma (t)\pa{\gamma(t)\alpha (t)-\dot \alpha (t)-1} - 2 \alpha(t) \dot \sigma(t)} >0$, \label{cond:G1+} \end{enumerate} then \begin{align*} \sup_{t\geq t_0} \norm{(x(t), y(t), \lambda(t))} < +\infty \enskip \text{and} \enskip \norm{(\dot {x}(t), \dot {y}(t), \dot{\lambda}(t))} = {\mathcal O}\pa{\frac{1}{\alpha(t)\sigma(t)}} . \end{align*} If moreover, \begin{enumerate}[label=(${\mathcal G}_{\arabic*}$),itemindent=10ex,start=5] \item $\inf_{t \geq t_0} \alpha(t) > 0$, \label{cond:G5} \end{enumerate} then \[ \sup_{t\geq t_0} \norm{(\dot {x}(t), \dot {y}(t), \dot{\lambda}(t))} < +\infty . 
\] \end{enumerate} \end{theorem} \begin{proof} We start from \eqref{Lypa_decr} in the proof of Theorem~\ref{ACFR,rescale}, which can be equivalently written \begin{equation*}\label{eq:lyapcont_1} \begin{split} &\delta^2(t)b(t)\pa{{\mathcal L}_\mu(x(t), y(t), \lambda^\star)- {\mathcal L}_\mu(x^\star, y^\star, \lambda^\star)} + \frac{1}{2}\anorm{v(t)}^{2} \\ &+\frac12\xi(t)\anorm{(x(t), y(t), \lambda(t))-(x^\star, y^\star, \lambda^\star)}^2 \leq {\mathcal E}(t_0). \end{split} \end{equation*} Since $(x^\star,y^\star,\lambda^\star) \in {\mathscr S}$, the first term is non-negative by \eqref{eq:saddlepoint}, and thus \begin{equation*} \frac{1}{2}\norm{v(t)}^{2} + \frac12\xi(t)\anorm{(x(t), y(t), \lambda(t))-(x^\star, y^\star, \lambda^\star)}^2 \leq {\mathcal E}(t_0). \end{equation*} Choosing a positive constant $C \geq \sqrt{2{\mathcal E}(t_0)}$, we immediately deduce that for all $t\geq t_0$ \begin{equation}\label{basic_maj_3} \anorm{(x(t), y(t), \lambda(t))-(x^\star, y^\star, \lambda^\star)} \leq \frac{C}{\sqrt{\xi(t)}} \quad \text{and} \quad \anorm{v(t)} \leq C . \end{equation} Set $z(t)= (x(t), y(t), \lambda(t))-(x^\star, y^\star, \lambda^\star)$. By definition of $v(t)$, we have \[ v(t)= \sigma(t) z(t) + \delta(t) \dot{z}(t). \] From the triangle inequality and the bound \eqref{basic_maj_3}, we get \[ \delta(t)\anorm{\dot{z}(t)} \leq C \pa{1+\frac{\sigma(t)}{\sqrt{\xi(t)}}}. \] According to the definition \eqref{basic_choice_0} of $\delta (t)$ and $\xi(t)$, we get \[ \anorm{(\dot {x}(t), \dot {y}(t), \dot{\lambda}(t))} \leq \frac{C}{\alpha(t)\sigma(t)}\pa{1+\sqrt{\frac{\sigma(t)}{\sigma (t)\pa{\gamma(t)\alpha (t)-\dot \alpha (t)-1} - 2 \alpha(t) \dot \sigma(t)}}}, \] which ends the proof. \qed \end{proof} \subsection{The role of $\alpha$ and time scaling} The time scaling parameter $b$ enters the conditions on the parameters only via \ref{cond:G4}, which therefore plays a central role in our analysis. 
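\begin{remark}
As an elementary illustration of the conditions above (a sanity check, with parameters chosen purely for illustration and not used in the sequel), take $\sigma(t)\equiv\bar\sigma>0$, $\alpha(t)\equiv\bar\alpha>0$ and $\gamma(t)\equiv\bar\gamma$ constant with $\bar\gamma\bar\alpha>1$. Then \eqref{basic_choice_0} gives $\delta=\bar\alpha\bar\sigma$ and $\xi=\bar\sigma^2\pa{\bar\gamma\bar\alpha-1}>0$, so \ref{cond:G1} holds; moreover $\sigma+\dot\delta-\delta\gamma=\bar\sigma\pa{1-\bar\alpha\bar\gamma}<0$, so \ref{cond:G2} holds, while \ref{cond:G3} holds as an equality. Condition \ref{cond:G4} then reduces to the linear ODE $\dot b=b/\bar\alpha$, whence $b(t)=b(t_0)\,e^{(t-t_0)/\bar\alpha}$. With this exponential time scaling, $\delta^2 b=\bar\alpha^2\bar\sigma^2 b(t)$ grows like $e^{(t-t_0)/\bar\alpha}$, and the rates of Theorem~\ref{ACFR,rescale} are exponential, ${\mathcal O}\bpa{e^{-(t-t_0)/\bar\alpha}}$.
\end{remark}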
Now consider relaxing \ref{cond:G4} to the inequality \begin{enumerate}[label=(${\mathcal G}_{\arabic*}^{+}$),start=4,itemindent=30ex] \item $\frac{d}{dt}\left(\alpha^2 \sigma^2 b\right)(t) - \alpha (t)\sigma (t)^2 b(t) \geq 0$. \label{cond:G4+} \end{enumerate} This is a weaker assumption in which case the corresponding term in \eqref{basic-Liap-22} does not vanish. However, such an inequality can still be integrated to yield meaningful convergence rates. This is what we are about to prove. \begin{theorem}\label{Lyap_gen} Suppose the assumptions of Theorem~\ref{ACFR,rescale}\ref{theoACFRrescale:itemb} hold, where condition \ref{cond:G4} is replaced with \ref{cond:G4+}. Let $(x^\star,y^\star,\lambda^\star) \in {\mathscr S} \neq \emptyset$. Assume also that $\inf F(x,y) > -\infty$. Then, for all $t \geq t_0$ \begin{align} {\mathcal L}(x(t),y(t),\lambda^\star) - {\mathcal L}(x^\star,y^\star,\lambda^\star) &= {\mathcal O}\pa{\exp\pa{-\int_{t_0}^t\frac{1}{\alpha(s)}ds}}, \label{rat-conv-L*-gen} \\ \norm{Ax(t)+By(t)-c}^2 &= {\mathcal O}\pa{\exp\pa{-\int_{t_0}^t\frac{1}{\alpha(s)}ds}} , \label{rat-conv-L*-gen-b} \\ -C_1\exp\pa{-\int_{t_0}^t\frac{1}{2\alpha(s)}ds} \leq F( x(t),y(t))-F^\star &\leq C_2\exp\pa{-\int_{t_0}^t\frac{1}{\alpha(s)}ds} , \label{rat-conv-L*-gen-obj} \end{align} where $C_1$ and $C_2$ are positive constants. \end{theorem} \begin{proof} We embark from \eqref{basic-Liap-22} in the proof of Theorem~\ref{ACFR,rescale}. In view of \ref{cond:G4+} and \eqref{basic_choice_0}, \eqref{basic-Liap-22} becomes \begin{equation}\label{eq:Eode} 0\geq \dfrac{d}{dt}{\mathcal E} - \pa{\frac{d}{dt}\pa{\alpha^2 \sigma^2 b}-\alpha \sigma ^2 b} {\mathcal F}_\mu(w) \geq \dfrac{d}{dt}{\mathcal E} - \dfrac{\frac{d}{dt}\pa{\alpha^2 \sigma^2 b}-\alpha \sigma ^2 b}{\alpha^2 \sigma^2 b}{\mathcal E} . \end{equation} Since $(x^\star,y^\star,\lambda^\star) \in {\mathscr S}$, ${\mathcal F}_\mu$ is non-negative and so is the Lyapunov function ${\mathcal E}$. 
Integrating \eqref{eq:Eode}, we obtain the existence of a positive constant $C$ such that, for all $t\geq t_0$ \[ 0 \leq {\mathcal E}(t) \leq C \alpha(t)^2 \sigma (t)^2 b(t) \exp\pa{-\int_{t_0}^t\frac{1}{\alpha(s)}ds}, \] which entails, after dropping the positive terms in ${\mathcal E}$, \begin{equation}\label{eq:Fmurate} {\mathcal F}_\mu(w(t))\leq C \exp\pa{-\int_{t_0}^t\frac{1}{\alpha(s)}ds}. \end{equation} \eqref{rat-conv-L*-gen} and \eqref{rat-conv-L*-gen-b} follow immediately from \eqref{eq:Fmurate} and the definition of ${\mathcal F}_\mu$. Let us now turn to \eqref{rat-conv-L*-gen-obj}. Arguing as in the proof of Theorem~\ref{ACFR,rescale}\ref{theoACFRrescale:itemb}, we have \begin{align*} F( x(t),y(t))-F^\star \geq -\anorm{\lambda^\star}\anorm{Ax(t)+By(t)-c} . \end{align*} Plugging \eqref{rat-conv-L*-gen-b} in this inequality yields the lower-bound of \eqref{rat-conv-L*-gen-obj}. For the upper-bound, we will argue as in the proof of Theorem~\ref{ACFR,rescale}\ref{theoACFRrescale:itema} by considering $\lambda^\star$ as a free variable in $\mathcal Z$. By assumption, $F$ is bounded from below. This, together with \eqref{rat-conv-L*-gen-b}, implies that ${\mathcal E}$ is also bounded from below, and we denote by $\underline{{\mathcal E}}$ this lower bound. Define $\tilde{{\mathcal E}}(t) = {\mathcal E}(t) - \underline{{\mathcal E}}$ if $\underline{{\mathcal E}}$ is negative and $\tilde{{\mathcal E}}(t) = {\mathcal E}(t)$ otherwise. Thus, from \eqref{eq:Eode}, it is easy to see that $\tilde{{\mathcal E}}$ verifies \begin{equation}\label{eq:Etode} \dfrac{d}{dt}\tilde{{\mathcal E}} \leq \dfrac{\frac{d}{dt}\pa{\alpha^2 \sigma^2 b}-\alpha \sigma ^2 b}{\alpha^2 \sigma^2 b}\tilde{{\mathcal E}} . 
\end{equation} Integrating \eqref{eq:Etode} and arguing with the sign of $\underline{{\mathcal E}}$, we get the existence of a positive constant $C$ such that, for all $t\geq t_0$ \[ {\mathcal E}(t) \leq \tilde{{\mathcal E}}(t) \leq C \alpha(t)^2 \sigma (t)^2 b(t) \exp\pa{-\int_{t_0}^t\frac{1}{\alpha(s)}ds} . \] Dropping the quadratic terms in ${\mathcal E}$, this yields \begin{align*} F(x(t),y(t))-F^\star + \dotp{\lambda^\star}{Ax(t)+By(t)-c} \leq C \exp\pa{-\int_{t_0}^t\frac{1}{\alpha(s)}ds} . \end{align*} When $Ax(t)+By(t)-c = 0$, we are done by taking, {\it e.g.}\,\, $\lambda^\star = 0$. Assume now that $Ax(t)+By(t)-c \neq 0$ and choose \[ \lambda^\star = \frac{Ax(t)+By(t)-c}{\anorm{Ax(t)+By(t)-c}} . \] We arrive at \begin{align*} F(x(t),y(t))-F^\star \leq F(x(t),y(t))-F^\star + \anorm{Ax(t)+By(t)-c} \leq C \exp\pa{-\int_{t_0}^t\frac{1}{\alpha(s)}ds} , \end{align*} which completes the proof. \qed \end{proof} \begin{remark}\label{rem:Lyap_gen} Though the rates in Theorem~\ref{ACFR,rescale} and Theorem~\ref{Lyap_gen} look apparently different, it turns out that as expected, those of Theorem~\ref{ACFR,rescale} are actually a specialisation of those in Theorem~\ref{Lyap_gen} when \ref{cond:G4+} holds as an equality, {\it i.e.}\,\, \ref{cond:G4} is verified. To see this, it is sufficient to realize that, with the notation $a(t) := \alpha(t)^2 \sigma(t)^2 b(t)$, \ref{cond:G4} is equivalent to $\dot{a}(t) = \frac{1}{\alpha (t)}a(t)$. Upon integration, we obtain $a(t) = \exp\left(\int_{t_0}^t\frac{1}{\alpha(s)} ds\right) $, or equivalently \[ \frac{1}{\alpha(t)^2 \sigma(t)^2 b(t)}= \exp\left(-\int_{t_0}^t\frac{1}{\alpha(s)} ds\right). \] \end{remark} \section{Well-posedness of \eqref{eq:trials}}\label{Cauchy-problem} In this section, we will show existence and uniqueness of a strong global solution to the Cauchy problem associated with~\eqref{eq:trials}. The main idea is to formulate \eqref{eq:trials} in the phase space as a non-autonomous first-order system. 
In the smooth case, we will invoke the non-autonomous Cauchy-Lipschitz theorem \cite[Proposition~6.2.1]{haraux91}. In the non-smooth case, we will use a standard Moreau-Yosida smoothing argument. \if { \begin{lemma}\label{haraux}\mbox{\rm (\cite[Prop. 6.2.1]{haraux91})} Let $G:I\times\mathcal Z\rightarrow \mathcal Z$ where $I=[t_0,+\infty[$ and $\mathcal Z$ is a Banach space. Assume that \, (i) for every $z\in \mathcal Z$, $G(\cdot,z)\in L^1_{loc}(I, \mathcal Z)$; \, (ii) for a.e. $t\in I$, for every $z_1, \, z_2\in \mathcal Z$, $$ \| G(t,z_1)-G(t,z_2)\|\leq K(t,\|z_1\|+\|z_2\|)\|z_1-z_2\|, \text{ where } K(\cdot, r)\in L^1_{loc}(I), \forall r\in\mathbb R_+; $$ \, (iii) for a.e. $t\in I$, for every $z\in \mathcal Z$, $$ \| G(t,z)\|\leq P(t)(1+\| z \|),\text{ where } P\in L^1_{loc}(I). $$ Then, for every $s\in I, z\in \mathcal Z$, there exists a unique solution $u_{s,z}\in W^{1,1}_{loc}(I,\mathcal Z)$ of the Cauchy problem: \begin{center} $\dot u_{s,z}(t)=G(t,u_{s,z}(t))$ for a.e. $t\in I$, and $u_{s,z}(s)=z$. \end{center} \end{lemma} } \fi \subsection{Case of globally Lipschitz continuous gradients} We consider first the case where the gradients of $f$ and $g$ are globally Lipschitz continuous over ${\mathcal X}$ and ${\mathcal Y}$. Let us start by recalling the notion of strong solution. \begin{definition}\label{def:strongsol} Denote ${\mathcal H} := {\mathcal X} \times {\mathcal Y} \times \mathcal Z$ equipped with the corresponding product space structure, and $w: t \in [t_0,+\infty[ \mapsto (x(t),y(t),\lambda(t)) \in {\mathcal H}$. 
The function $w$ is a strong global solution of the dynamical system \eqref{eq:trials} if it satisfies the following properties: \begin{enumerate}[label=$\bullet$] \item $w$ is in ${\mathcal C}^1([t_0,+\infty[;{\mathcal H})$; \item $w$ and $\dot w$ are absolutely continuous on every compact subset of the interior of $[t_0,+\infty[$ (hence almost everywhere differentiable); \item for almost all $t \in [t_0,+\infty[$, \eqref{eq:trials} holds with $w(t_0) = (x_0,y_0,\lambda_0)$ and $\dot w(t_0) = (u_0,v_0,\nu_0)$. \end{enumerate} \end{definition} \begin{theorem}\label{thm:wellglobal} Suppose that \eqref{eq:HP} holds\footnote{Actually, convexity is not needed here.} and, moreover, that $\nabla f$ and $\nabla g$ are Lipschitz continuous, respectively over ${\mathcal X}$ and ${\mathcal Y}$. Assume that $\gamma, \, \alpha, \, b: [t_0, +\infty[ \to {\mathbb R}^+$ are non-negative continuous functions. Then, for any given initial condition $(x(t_0), \dot{x}(t_0))=(x_0,\dot x_0)\in {\mathcal X} \times {\mathcal X}$, $(y(t_0), \dot{y}(t_0))=(y_0,\dot y_0)\in {\mathcal Y} \times {\mathcal Y}$, $(\lambda(t_0), \dot{\lambda}(t_0))=(\lambda_0,\dot \lambda_0)\in \mathcal Z \times \mathcal Z$, the evolution system \eqref{eq:trials} has a unique strong global solution. \end{theorem} \begin{proof} Recall the notations of Definition~\ref{def:strongsol}. Let $I = [t_0,+\infty[$ and let $Z: t \in I \mapsto (w(t),\dot w(t)) \in {\mathcal H}^2$. 
\eqref{eq:trials} can be equivalently written as the Cauchy problem on ${\mathcal H}^2$ \begin{equation}\label{syst1g} \begin{cases} \dot Z(t) + G(t,Z(t)) = 0 & \text{ for } t \in I, \\ Z(t_0)=Z_0 , \end{cases} \end{equation} where $Z_0=(x_0,y_0,\lambda_0,u_0,v_0,\nu_0)$, and $G: I \times {\mathcal H}^2 \to {\mathcal H}^2$ is the operator \begin{equation}\label{def:G} G(t, (x,y,\lambda),(u,v,\nu))= \begin{pmatrix} -u \\ -v \\ -\nu \\ \gamma(t)u + b(t)\bpa{\nabla f(x) + A^*\pa{\lambda+\alpha(t)\nu + \mu (Ax+By-c)}} \\ \gamma(t)v + b(t)\bpa{\nabla g (y) + B^*\pa{\lambda+\alpha(t)\nu + \mu (Ax+By-c)}} \\ \gamma(t)\nu - b(t)\bpa{A(x + \alpha(t)u) + B(y + \alpha(t)v) - c} \end{pmatrix} . \end{equation} To invoke \cite[Proposition~6.2.1]{haraux91}, it is sufficient to check that for a.e. $t \in I$, $G(t,\cdot)$ is $\beta(t)$-Lipschitz continuous with $\beta(\cdot) \in L^1_{loc}(I)$, and for a.e. $t \in I$, $\anorm{G(t,Z)} = {\mathcal O}\bpa{P(t)(1+\anorm{Z})}$, $\forall Z \in {\mathcal H}^2$, with $P(\cdot) \in L^1_{loc}(I)$. Since $\nabla f$, $\nabla g$ are globally Lipschitz continuous, and $A$ and $B$ are bounded linear operators, an elementary computation shows that there exists a constant $C > 0$ such that \[ \anorm{G(t,Z) - G(t,\bar Z)} \leq C \beta(t) \anorm{Z - \bar{Z}} , \quad \beta(t) = 1+\gamma(t)+b(t)(1+\alpha(t)) . \] Owing to the continuity of the parameters $\gamma (\cdot)$, $\alpha (\cdot)$, $b(\cdot)$, $\beta(\cdot)$ is integrable on $[t_0,T]$ for all $t_0 < T < +\infty$. A similar calculation shows that \[ \anorm{G(t,(x,y,\lambda),(u,v,\nu))} \leq C \beta(t) \bpa{\anorm{\pa{\nabla f(x),\nabla g(y)}}+\anorm{\pa{x,y,\lambda,u,v,\nu}}} , \] and we conclude similarly. It then follows from \cite[Proposition~6.2.1]{haraux91} that there exists a unique global solution $Z(\cdot) \in W^{1,1}_{loc}(I;{\mathcal H}^2)$ of \eqref{syst1g} satisfying the initial condition $Z(t_0)=Z_0$, and thus, by \cite[Corollary~A.2]{Bre1}, that $Z(\cdot)$ is a strong global solution to \eqref{syst1g}. 
This in turn leads to the existence and uniqueness of a strong solution $(x(\cdot),y(\cdot),\lambda(\cdot))$ of \eqref{eq:trials}. \qed \end{proof} \begin{remark} One sees from the proof that for the above result to hold, it suffices to assume that the parameters $\gamma$, $\alpha$, $b$ are locally integrable instead of continuous. In addition, in the above results, we even have existence and uniqueness of a classical solution. \end{remark} \subsection{Case of locally Lipschitz continuous gradients} Under local Lipschitz continuity assumptions on the gradients $\nabla f$ and $\nabla g$, the operator $Z \mapsto G(t,Z)$ defined in \eqref{def:G} is only Lipschitz continuous over the bounded subsets of ${\mathcal H}^2$. As a consequence, the Cauchy-Lipschitz theorem provides the existence and uniqueness of a local solution. To pass from a local solution to a global solution, we will rely on the estimates established in Theorem~\ref{ACFR_rescale_boundedness}. \begin{theorem}\label{thm:welllocal} Suppose that \eqref{eq:HP} holds\footnote{Again, convexity is superfluous here.} and, moreover, that $\nabla f$ and $\nabla g$ are Lipschitz continuous over the bounded subsets of respectively ${\mathcal X}$ and ${\mathcal Y}$. Assume that $\gamma, \, \alpha, \, b: [t_0, +\infty[ \to {\mathbb R}^+$ are non-negative continuous functions such that the conditions \ref{cond:G1+}, \ref{cond:G2}, \ref{cond:G3}, \ref{cond:G4} and \ref{cond:G5} are satisfied, and that $\sup_{t \geq t_0} \sigma(t) < +\infty$. Then, for any initial condition $(x(t_0), \dot{x}(t_0))=(x_0,\dot x_0)\in {\mathcal X} \times {\mathcal X}$, $(y(t_0), \dot{y}(t_0))=(y_0,\dot y_0)\in {\mathcal Y} \times {\mathcal Y}$, $(\lambda(t_0), \dot{\lambda}(t_0))=(\lambda_0,\dot \lambda_0)\in \mathcal Z \times \mathcal Z$, the evolution system \eqref{eq:trials} has a unique strong global solution. \end{theorem} \begin{proof} We use the same notation as in the proof of Theorem~\ref{thm:wellglobal}. 
Let us consider the maximal solution of the Cauchy problem \eqref{syst1g}, say $Z : [t_0, T[ \to {\mathcal H}^2$. We have to prove that $T=+\infty$. Following a classical argument, we argue by contradiction, and suppose that $T < +\infty$. It is then sufficient to prove that the limit of $Z(t)$ exists as $t \to T$, so that it will be possible to extend $Z$ locally to the right of $T$ thus getting a contradiction. According to the Cauchy criterion, and the constitutive equation $\dot Z(t) = G(t,Z(t))$, it is sufficient to prove that $Z(t)$ is bounded over $[t_0,T[$. At this point, we use the estimates provided by Theorem~\ref{ACFR_rescale_boundedness}, which gives precisely this result under the conditions imposed on the parameters. \qed \end{proof} \subsection{The non-smooth case} For a large number of applications ({\it e.g.}\,\, data processing, machine learning, statistics), non-smooth functions are ubiquitous. To cover these practical situations, we need to consider the case where the functions $f$ and $g$ are non-smooth. In order to adapt the dynamic \eqref{eq:trials} to this non-smooth situation, we will consider the corresponding differential inclusion \begin{equation} \begin{cases} \ddot x+\gamma (t) \dot x + b(t) \bpa{\partial f(x) + A^* \brac{\lambda + \alpha(t) \dot\lambda + \mu (Ax+By-c)}} &\ni 0 \\ \ddot y+\gamma (t)\dot y + b(t)\bpa{\partial g(y) + B^* \brac{\lambda + \alpha(t) \dot\lambda + \mu (Ax+By-c)}} &\ni 0 \\ \ddot \lambda+\gamma (t)\dot \lambda - b(t) \bpa{A(x + \alpha(t)\dot x) + B(y + \alpha(t)\dot y) -c} &= 0\\ (x(t_0),y(t_0),\lambda(t_0)) = (x_0,y_0,\lambda_0) \enskip \text{and} \enskip \\ (\dot x(t_0),\dot y(t_0),\dot \lambda(t_0)) = (u_0,v_0,\nu_0) , \end{cases}\label{eq:trialsnonsmooth} \end{equation} where $\partial f$ and $\partial g$ are the subdifferentials of $f$ and $g$, respectively. 
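A prototypical non-smooth instance covered by \eqref{eq:trialsnonsmooth} (recalled here purely for illustration) is the $\ell^1$ norm arising in sparse recovery: for $f(x)=\norm{x}_1$ on ${\mathcal X}={\mathbb R}^n$, \[ \partial f(x) = \bra{u \in {\mathbb R}^n \,:\, u_i=\mathrm{sign}(x_i) \text{ if } x_i\neq 0, \ u_i\in[-1,1] \text{ if } x_i=0}, \] so the first inclusion in \eqref{eq:trialsnonsmooth} is genuinely set-valued whenever some component of $x$ vanishes.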
Beyond global existence issues that we will address shortly, one may wonder whether our Lyapunov analysis in the previous sections is still valid in this case. The answer is affirmative provided one takes some care in two main steps that are central in our analysis. First, when taking the time-derivative of the Lyapunov function, one now has to invoke the (generalized) chain rule for derivatives over curves (see \cite{Bre1}). The second ingredient is the validity of the subdifferential inequality for convex functions. In turn, all our results and estimates presented in the previous sections can be transposed to this more general non-smooth context. Indeed, the approximation scheme that we will present shortly turns out to be monotonically increasing. This gives a variational convergence (epi-convergence) which allows us to simply pass to the limit in the estimates established in the smooth case. Let us now turn to the existence of a global solution to \eqref{eq:trialsnonsmooth}. We will again consider strong solutions to this problem, {\it i.e.}\,\, solutions that are ${\mathcal C}^1([t_0,+\infty[;{\mathcal H})$, locally absolutely continuous, and such that \eqref{eq:trialsnonsmooth} holds almost everywhere on $[t_0,+\infty[$. A natural idea is to use the Moreau-Yosida regularization in order to bring the problem to the smooth case before passing to an appropriate limit. Recall that, for any $\theta > 0$, the Moreau envelopes $f_{\theta}$ and $g_{\theta}$ of $f$ and $g$ are defined respectively by \[ f_{\theta} (x)= \min_{\xi \in {\mathcal X}} \bra{f(\xi) + \frac{1}{2 \theta} \anorm{x-\xi}^2}, \quad g_{\theta} (y)= \min_{\eta \in{\mathcal Y}} \bra{g(\eta)+ \frac{1}{2 \theta} \anorm{y-\eta}^2}. \] As a classical result, $f_{\theta}$ and $g_{\theta}$ are continuously differentiable and their gradients are $\frac{1}{\theta}$-Lipschitz continuous. 
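For instance (a standard computation, recalled only for illustration), for $f=\abs{\cdot}$ on ${\mathbb R}$ the Moreau envelope is the Huber function \[ f_{\theta}(x)= \begin{cases} \dfrac{x^2}{2\theta} & \text{if } \abs{x}\leq \theta,\\ \abs{x}-\dfrac{\theta}{2} & \text{otherwise}, \end{cases} \qquad \nabla f_{\theta}(x)= \begin{cases} \dfrac{x}{\theta} & \text{if } \abs{x}\leq \theta,\\ \mathrm{sign}(x) & \text{otherwise}, \end{cases} \] whose gradient is indeed $\frac{1}{\theta}$-Lipschitz and converges pointwise, as $\theta \to 0^+$, to $\mathrm{sign}(x)$ for $x \neq 0$ and to $0$ at the origin.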
We are then led to consider, for each $\theta >0$, the dynamical system \begin{equation} \begin{cases} \ddot x_\theta +\gamma (t) \dot x_\theta + b(t) \bpa{\nabla f_{\theta} (x_\theta) + A^* \brac{\lambda_\theta + \alpha(t) \dot\lambda_\theta + \mu (Ax_\theta+By_\theta-c)}} &=0 \\ \ddot y_\theta+\gamma (t)\dot y_\theta + b(t)\bpa{\nabla g_{\theta} (y_\theta) + B^* \brac{\lambda_\theta + \alpha(t) \dot\lambda_\theta + \mu (Ax_\theta+By_\theta-c)}} &=0 \\ \ddot \lambda_\theta+\gamma (t)\dot \lambda_\theta - b(t) \bpa{A(x_\theta + \alpha(t)\dot x_\theta) + B(y_\theta + \alpha(t)\dot y_\theta) -c} &= 0 . \end{cases}\label{eq:trialssmoothed} \end{equation} The system \eqref{eq:trialssmoothed} falls within the scope of our previous study, which provides existence and uniqueness of a strong global solution. In doing so, we generate a filtered sequence $(x_{\theta},y_{\theta},\lambda_{\theta})_{\theta}$ of trajectories.\\ The challenging question is now to pass to the limit in the system above as $ \theta \to 0^+$. This is a non-trivial problem, and to answer it, we have to assume that the spaces ${\mathcal X}$, ${\mathcal Y}$ and $\mathcal Z$ are finite dimensional, and that $f$ and $g$ are convex real-valued ({\it i.e.}\,\, $\mathrm{dom}(f) = {\mathcal X}$ and $\mathrm{dom}(g)={\mathcal Y}$, in which case $f$ and $g$ are continuous). Recall that $\partial F(x,y) = \partial f(x) \times \partial g(y)$, and denote by $\brac{\partial F(x,y)}^0$ the minimal norm selection of $\partial F(x,y)$. \begin{theorem}\label{thm:wellnonsmooth} Suppose that ${\mathcal X}$, ${\mathcal Y}$, $\mathcal Z$ are finite dimensional Hilbert spaces, and that the functions $f: {\mathcal X} \to {\mathbb R} $ and $g: {\mathcal Y} \to {\mathbb R}$ are convex. 
Assume that \begin{enumerate}[label=(\roman*)] \item $F$ is coercive on the affine feasibility set; \label{cond:coer} \item $\beta_F := \sup_{(x,y) \in {\mathcal X} \times {\mathcal Y}} \anorm{\brac{\partial F(x,y)}^0}< +\infty$; \label{cond:subdiffbnd} \item the linear operator $L = [A ~ B]$ is surjective. \label{cond:Lsurj} \end{enumerate} Suppose also that $\gamma, \, \alpha, \, b: [t_0, +\infty[ \to {\mathbb R}^+$ are non-negative continuous functions such that the conditions \ref{cond:G1+}, \ref{cond:G2}, \ref{cond:G3}, \ref{cond:G4} and \ref{cond:G5} are satisfied, and that $\sup_{t \geq t_0} \sigma(t) < +\infty$. Then, for any initial condition $(x(t_0), \dot{x}(t_0))=(x_0,\dot x_0)\in {\mathcal X} \times {\mathcal X}$, $(y(t_0), \dot{y}(t_0))=(y_0,\dot y_0)\in {\mathcal Y} \times {\mathcal Y}$, $(\lambda(t_0), \dot{\lambda}(t_0))=(\lambda_0,\dot \lambda_0)\in \mathcal Z \times \mathcal Z$, the evolution system \eqref{eq:trialsnonsmooth} admits a strong global solution. \end{theorem} Condition~\ref{cond:coer} is natural and ensures for instance that the solution set of \eqref{eq:P} is non-empty. Condition~\ref{cond:Lsurj} is also very mild. A simple case where \ref{cond:subdiffbnd} holds is when $f$ and $g$ are Lipschitz continuous. \begin{proof} The key property is that the estimates obtained in Theorems~\ref{ACFR,rescale} and \ref{ACFR_rescale_boundedness}, when applied to \eqref{eq:trialssmoothed}, have a favorable dependence on $\theta$. Indeed, a careful examination of the estimates shows that $\theta$ enters them through the Lyapunov function at $t_0$ only via $|F_{\theta}(x_0,y_0) - F_{\theta}(x_{\theta}^\star,y_{\theta}^\star)|$ and $\anorm{(x_{\theta}^\star,y_{\theta}^\star,\lambda_{\theta}^\star)}$, where $F_{\theta}(x,y)=f_{\theta}(x)+g_{\theta}(y)$, $(x_{\theta}^\star,y_{\theta}^\star) \in \argmin_{Ax+By=c} F_{\theta}(x,y)$ and $\lambda_{\theta}^\star$ is an associated dual multiplier; see \eqref{eq:C0}. 
With standard properties of the Moreau envelope, see \cite[Chapter~3]{AttouchBook} and \cite[Chapter~12]{BauschkeCombettes}, one can show that for all $(x,y) \in {\mathcal X} \times {\mathcal Y}$ \[ F(x,y) - \frac{\theta}{2}\anorm{\brac{\partial F(x,y)}^0}^2 \leq F_{\theta}(x,y) \leq F(x,y) . \] This, together with the fact that $(x_{\theta}^\star,y_{\theta}^\star) \in \argmin_{Ax+By=c} F_{\theta}(x,y)$ and $(x^\star,y^\star) \in \argmin_{Ax+By=c} F(x,y)$, yields \[ F_{\theta}(x_{\theta}^\star,y_{\theta}^\star) \leq F_{\theta}(x^\star,y^\star) \leq F(x^\star,y^\star) \leq F(x_{\theta}^\star,y_{\theta}^\star) . \] Thus \begin{multline*} F(x_0,y_0) - F(x^\star,y^\star) - \frac{\theta}{2}\anorm{\brac{\partial F(x_0,y_0)}^0}^2 \leq F_{\theta}(x_0,y_0) - F_{\theta}(x_{\theta}^\star,y_{\theta}^\star) \\ \leq F(x_0,y_0) - F(x^\star,y^\star) + \frac{\theta}{2}\anorm{\brac{\partial F(x_{\theta}^\star,y_{\theta}^\star)}^0}^2 . \end{multline*} This entails, owing to \ref{cond:subdiffbnd}, that \[ \abs{F_{\theta}(x_0,y_0) - F_{\theta}(x_{\theta}^\star,y_{\theta}^\star)} \leq \abs{F(x_0,y_0) - F(x^\star,y^\star)} + \frac{\beta_F^2\theta}{2} \] and thus, since we are interested in the limit as $\theta \to 0^+$, \[ \sup_{\theta \in [0,\bar{\theta}]} \abs{F_{\theta}(x_0,y_0) - F_{\theta}(x_{\theta}^\star,y_{\theta}^\star)} \leq \abs{F(x_0,y_0) - F(x^\star,y^\star)} + \frac{\beta_F^2\bar{\theta}}{2} < +\infty . \] On the other hand, \[ F(x_{\theta}^\star,y_{\theta}^\star) \leq F_{\theta}(x_{\theta}^\star,y_{\theta}^\star) + \frac{\beta_F^2\theta}{2} \leq F_{\theta}(x^\star,y^\star) + \frac{\beta_F^2\theta}{2} \leq F(x^\star,y^\star) + \frac{\beta_F^2\bar{\theta}}{2} . \] Thus, in view of \ref{cond:coer}, there exist $a > 0$ and $b \in {\mathbb R}$ such that \[ a\anorm{(x_{\theta}^\star,y_{\theta}^\star)} + b \leq F(x^\star,y^\star) + \frac{\beta_F^2\bar{\theta}}{2} , \] which shows that \[ \sup_{\theta \in [0,\bar{\theta}]} \anorm{(x_{\theta}^\star,y_{\theta}^\star)} < +\infty . 
\] Let us turn to $\lambda_{\theta}^\star$. When $\lambda_{\theta}^\star$ is chosen as in \eqref{eq:lambdaunit}, then we are done. When $\lambda_{\theta}^\star$ is the optimal dual multiplier satisfying \eqref{opt_system}, then it is a solution to the Fenchel-Rockafellar dual problem \[ \min_{\lambda \in \mathcal Z} F_\theta^*(-L^*\lambda) + \dotp{c}{\lambda} , \] where $F_\theta^*$ is the Legendre-Fenchel conjugate of $F_\theta$. Without loss of generality, we assume $c = 0$. Classical conjugacy results give \[ F_\theta^*(u) = F^*(u) + \frac{\theta}{2}\norm{u}^2 . \] Since $f$ and $g$ are convex and real-valued, the domain of $F$ is full. This is equivalent to coercivity of $F^*$. This, together with the injectivity of $L^*$ (see \ref{cond:Lsurj}), implies that there exist $a > 0$ and $b \in {\mathbb R}$ (potentially different from those above) such that \[ a\anorm{\lambda_{\theta}^\star} + b \leq F^*(-L^*\lambda_{\theta}^\star) \leq F_{\theta}^*(-L^*\lambda_{\theta}^\star) \leq F_{\theta}^*(-L^*\lambda^\star) \leq F^*(-L^*\lambda^\star) + \frac{\bar{\theta}}{2}\norm{L^*\lambda^\star}^2 < +\infty. \] Altogether, this shows that \[ \sup_{\theta \in [0,\bar{\theta}]} \anorm{\lambda_{\theta}^\star} < +\infty . \] Combining the above with Theorem~\ref{ACFR_rescale_boundedness}, we conclude that for all $T > t_0$, the trajectories $(x_{\theta}(\cdot),y_{\theta}(\cdot),\lambda_{\theta}(\cdot))$ and the velocities $(\dot x_{\theta}(\cdot),\dot y_{\theta}(\cdot),\dot \lambda_{\theta}(\cdot))$ are bounded in $L^2(t_0,T;{\mathcal X} \times {\mathcal Y} \times \mathcal Z)$ uniformly in $\theta$. Since ${\mathcal X}$, ${\mathcal Y}$ and $\mathcal Z$ are finite dimensional spaces, we deduce by the Ascoli-Arzel\`{a} theorem that the trajectories are relatively compact for the uniform convergence over the bounded time intervals. 
By properties of the Moreau envelope, we also have, for all $(x,y) \in {\mathcal X} \times {\mathcal Y}$, \[ \anorm{\nabla F_{\theta}(x,y)} \nearrow \anorm{\brac{\partial F(x,y)}^0} \text{ as } \theta \searrow 0, \] and thus \[ \anorm{\nabla F_{\theta}(x,y)} \leq \beta_F . \] Using this and the boundedness assertions on the trajectories and velocities proved above in the constitutive equations \eqref{eq:trialssmoothed}, the acceleration also remains bounded on bounded time intervals. Passing to the limit as $\theta \to 0^+$ in \eqref{eq:trialssmoothed} is therefore justified by a classical maximal monotonicity argument. Indeed, we work with the canonical extension of the maximally monotone operators $\nabla F_\theta$ and $\partial F$ to $L^2(t_0,T, {\mathcal X} \times {\mathcal Y})$, and, in this functional setting, we use that $\nabla F_\theta$ graph converges to $\partial F$ in the strong-weak topology. \qed \end{proof} We conclude this section by noting that, at this stage, uniqueness of the solution to \eqref{eq:trialsnonsmooth} is a difficult open problem. In fact, even existence in infinite dimension and/or for arbitrary proper lower semicontinuous convex functions $f$ and $g$ is not clear. This goes far beyond the scope of the present paper and we leave it to a future work. \if { Differentiating ${\mathcal E}$ with respect to $t$ gives \begin{equation}\label{der-E-H} \dfrac{d}{dt}{\mathcal E} (t)=\dot{\delta}(t) {\mathcal F}_\mu(w(t))+ \delta (t) \dotp{\nabla {\mathcal F}_\mu(w(t))}{\dot{w}(t)}+ \left\langle v(t),\dot{v}(t) \right\rangle .
\end{equation} By definition of $v(t)$, and by using the constitutive equation \eqref{eq:trials}, we have $$\begin{array}{lll} \dot{v}(t) & = & \gamma_0 \dot w (t) + \beta(t) \nabla_{\alpha}{\mathcal L}_\mu(w(t)) + t \Big( \ddot w (t) + \beta(t) \dfrac{d}{dt} \nabla_{\alpha}{\mathcal L}_\mu(w(t)) + \dot{\beta}(t) \nabla_{\alpha}{\mathcal L}_\mu(w(t)) \Big) \\ &=& \gamma_0 \dot w (t) + \beta(t) \nabla_{\alpha}{\mathcal L}_\mu(w(t)) + t \Big( -\frac{\gamma_0}{t}\dot w (t) -b(t)\nabla_{\alpha}{\mathcal L}_\mu(w(t)) + \dot{\beta}(t) \nabla_{\alpha}{\mathcal L}_\mu(w(t)) \Big)\\ &=& t\left(\dot{\beta}(t) + \frac{\beta (t)}{t} -b(t) \right) \nabla_{\alpha}{\mathcal L}_\mu(w(t)) . \end{array} $$ Elementary computation gives $$ \nabla_\alpha{\mathcal L}_\mu(w(t)) = \nabla {\mathcal F}_\mu(w(t)) + C(t) $$ where \begin{eqnarray*} C(t):=\left[ \begin{array}{l} A^*(\lambda (t)-\lambda^\star+\alpha (t) \dot\lambda (t)) \\ B^*(\lambda (t)-\lambda^\star+\alpha (t)\dot\lambda (t) ) \\ -A(x(t)+\alpha (t)\dot x (t))-B(y (t)+\alpha (t)\dot y (t))+c \end{array} \right]. \end{eqnarray*} Unambiguously, to shorten formulas, we sometimes omit the variable $t$. According to the above formulas for $ v(t) $ and $ \dot v(t) $, we get \begin{eqnarray*} \dotp{v(t)}{\dot{v(t)}} & =& t\left(\dot{\beta} + \frac{\beta}{t} -b \right) \big\langle \nabla_{\alpha}{\mathcal L}_\mu(w), (\gamma_0 -1)( w-w^\star)+t\Big( \dot w + \beta \nabla_{\alpha}{\mathcal L}_\mu(w) \Big) \big\rangle\\ &=&(\gamma_0 -1) t\left(\dot{\beta} + \frac{\beta}{t} -b \right) \big\langle \nabla_{\alpha}{\mathcal L}_\mu(w), w-w^\star \big\rangle\\ & + & t^2\left(\dot{\beta} + \frac{\beta}{t} -b \right) \big\langle \nabla_{\alpha}{\mathcal L}_\mu(w), \dot w \big\rangle \\ &+& t^2\left(\dot{\beta} + \frac{\beta}{t} -b \right) \beta \| \nabla_{\alpha}{\mathcal L}_\mu(w) \|^2. \end{eqnarray*} \tcb{Let us} insert this expression in \eqref{der-E-H}. 
We obtain \begin{eqnarray*} \dfrac{d}{dt}{\mathcal E} (t)&=& \dot{\delta}(t) {\mathcal F}_\mu(w(t))+ \delta (t) \dotp{\nabla_\alpha{\mathcal L}_\mu(w))}{\dot{w}(t)}-\delta (t)\dotp{C(t)}{\dot{w}(t)} \\ &+&(\gamma_0 -1) t\left(\dot{\beta} + \frac{\beta}{t} -b \right) \big\langle \nabla_{\alpha}{\mathcal L}_\mu(w), w-w^\star \big\rangle\\ & + & t^2\left(\dot{\beta} + \frac{\beta}{t} -b \right) \big\langle \nabla_{\alpha}{\mathcal L}_\mu(w), \dot w \big\rangle + t^2\left(\dot{\beta} + \frac{\beta}{t} -b \right) \beta \| \nabla_{\alpha}{\mathcal L}_\mu(w) \|^2. \end{eqnarray*} Take $$ \delta (t) = t^2 \left(b(t) -\dot{\beta}(t) - \frac{\beta (t)}{t} \right) $$ so that the term $\big\langle\nabla{\mathcal F}_\mu(w ) , \dot w \big\rangle $ occurs with its opposite, and therefore disappears. Moreover we suppose that $\delta (t)$ is nonnegative. Thus, the above formula simplifies to \begin{eqnarray*} &&\dfrac{d}{dt}{\mathcal E} (t) + \beta (t)\delta (t) \| \nabla_{\alpha}{\mathcal L}_\mu(w) \|^2 + (\gamma_0 -1) \frac{\delta (t)}{t} \big\langle \nabla_{\alpha}{\mathcal L}_\mu(w (t)), w(t)-w^\star \big\rangle - \dot{\delta}(t) {\mathcal F}_\mu(w(t))\\ &&\hspace{2cm} +\delta (t)\dotp{C(t)}{\dot{w}(t)} =0. \end{eqnarray*} By convexity of ${\mathcal F}_\mu$, we have \[ {\mathcal L}_\mu(w^\star)- {\mathcal L}_\mu(w (t)) \geq\dotp{\nabla_{\alpha}{\mathcal L}_\mu(w (t))}{w^\star-w(t)}, \] which, by definition of ${\mathcal F}_\mu$ gives $$ \big\langle \nabla_{\alpha}{\mathcal L}_\mu(w (t)), w(t)-w^\star \big\rangle \geq {\mathcal F}_\mu (w(t)). $$ Therefore \begin{eqnarray*} &&\dfrac{d}{dt}{\mathcal E} (t) + \beta (t)\delta (t) \| \nabla_{\alpha}{\mathcal L}_\mu(w) \|^2 + \Big((\gamma_0 -1) \frac{\delta (t)}{t} - \dot{\delta}(t)\Big) {\mathcal F}_\mu(w(t))+\delta (t)\dotp{C(t)}{\dot{w}(t)} =0. 
\end{eqnarray*} On the other hand, a similar computation as in Theorem 1 gives \[ \begin{array}{lll} \dotp{C(t)}{\dot{w}(t)}&=& \sigma \big\langle \lambda-\lambda^\star+\alpha \dot\lambda \, ,\, Ax-Ax^\star \big\rangle + \delta \big\langle\lambda-\lambda^\star+\alpha \dot\lambda , A\dot x \big\rangle \\ &&+ \sigma \big\langle \lambda-\lambda^\star+\alpha \dot\lambda \, ,\, By-By^\star \big\rangle + \delta \big\langle\lambda-\lambda^\star+\alpha \dot\lambda , B\dot y \big\rangle \\ &&- \sigma \langle A(x+\alpha \dot x)+B(y+\alpha \dot y)-c\, ,\, \lambda-\lambda^\star \rangle \\ &&- \delta \langle A(x+\alpha \dot x)+B(y+\alpha \dot y)-c\, , \dot \lambda \big\rangle . \end{array} \] \tcb{Let us} recall that $(x^\star,y^\star) \in {\mathcal X}\times{\mathcal Y} $ is a solution of \eqref{eq:P}. Hence $A(x^\star) + B(y^\star) =c$. According to this property, \tcb{Let us} rearrange ${\mathcal W}$ as follows: \[ \begin{array}{lll} {\mathcal W}&=& \sigma \big\langle \lambda-\lambda^\star+\alpha \dot\lambda \, ,\, Ax+By-c \big\rangle + \delta \big\langle\lambda-\lambda^\star+\alpha \dot\lambda , A\dot x + B\dot y\big\rangle \\ &&- \sigma \big\langle Ax+By-c\, ,\, \lambda-\lambda^\star \big\rangle - \sigma\alpha \big\langle A \dot x+B \dot y\, ,\, \lambda-\lambda^\star \big\rangle \\ &&- \delta \big\langle Ax+By-c\, ,\dot \lambda \big\rangle - \delta\alpha \big\langle A \dot x+B \dot y\, , \dot \lambda \big\rangle \\ &=& (\sigma \alpha - \delta)\Big[ \big\langle Ax+By-c\, ,\dot \lambda \big\rangle - \big\langle A \dot x+B \dot y\, ,\, \lambda-\lambda^\star \big\rangle \Big]. \end{array} \] Since it is difficult to control the sign of the above expression, we are naturally led to make the choice: for all $t\geq t_0$ \begin{equation} \label{basic_choice} \delta (t)= \sigma(t) \alpha (t), \end{equation} which gives ${\mathcal W} =0$. \tcb{Let us} return to \eqref{basic-Lyap1}. 
Collecting the above results, we get \begin{eqnarray*} \dfrac{d}{dt}{\mathcal E} +\left( \delta b \sigma-\dfrac{d}{dt}(\delta^2b) \right){\mathcal F}_\mu(w) \leq \big(\frac12\dot\xi + \sigma\dot\sigma \big) \|w-w^\star\|^2 + [\sigma-\dot\delta-\delta\gamma]\delta \|\dot w\|^2. \end{eqnarray*} Thus, assuming the conditions \begin{eqnarray} && \sigma-\dot\delta-\delta\gamma \leq 0 \label{def:sigma2-H} \\ &&\sigma\dot\sigma+\frac12\dot\xi\leq 0 \label{def:sigma1-H} \end{eqnarray} we obtain the inequality \begin{equation}\label{basic-Liap-22-H} \dfrac{d}{dt}{\mathcal E} +\left( \delta b \sigma-\dfrac{d}{dt}(\delta^2b) \right){\mathcal F}_\mu(w) \leq 0. \end{equation} The sign of ${\mathcal F}_\mu (w)(t)$ is a priori unknown, because of the term $\langle\lambda^\star , Ax(t)+By(t)-c\rangle $ which comes within its definition.Therefore, we are led to assume that \begin{equation}\label{def:sigma3-H} \delta b \sigma-\dfrac{d}{dt}(\delta^2b)= 0. \end{equation} Then, inequality \eqref{basic-Liap-22-H} gives $\dfrac{d}{dt}{\mathcal E}(t)\leq 0$ on $(t_0,+\infty)$. That's the basic ingredient of the Lyapunov analysis. } \fi \section{The uniformly convex case}\label{sec:strongly_convex} We now turn to examine the convergence properties of the trajectories generated by \eqref{eq:trials}, when the objective $F$ in \eqref{eq:P} is uniformly convex on bounded sets. Recall, see {\it e.g.}\,\, \cite{BauschkeCombettes}, that $F: {\mathcal X} \times {\mathcal Y} \to {\mathbb R}$ is uniformly convex on bounded sets if, for each $r > 0$, there is an increasing function $\psi_r: [0,+\infty[ \to [0,+\infty[$ vanishing only at the origin, such that \begin{equation}\label{eq:uniformconv} F(v) \geq F(w) + \dotp{\nabla F(w)}{v-w} + \psi_r(\anorm{v-w}) \end{equation} for all $(v,w) \in ({\mathcal X} \times {\mathcal Y})^2$ such that $\anorm{v} \leq r$ and $\anorm{w} \leq r$. The strongly convex case corresponds to $\psi_r(t) = c_F t^2/2$ for some $c_F > 0$. 
In finite dimension, strict convexity of $F$ entails uniform convexity on any non-empty bounded closed convex subset of ${\mathcal X} \times {\mathcal Y}$, see \cite[Corollary~10.18]{BauschkeCombettes}. \begin{theorem} \label{thm:convergence_s} Suppose that $F$ is uniformly convex on bounded sets, and let $(x^\star, y^\star)$ be the unique solution of the minimization problem \eqref{eq:P}. Assume also that ${\mathscr S}$, the set of saddle points of ${\mathcal L}$ in \eqref{eq:minmax} is non-empty. Suppose that the conditions \ref{cond:G1+}--\ref{cond:G4} on the coefficients of \eqref{eq:trials} are satisfied for all $t \geq t_0$. Then, each solution trajectory $t\in [t_0, +\infty[ \mapsto (x(t),y(t),\lambda(t))$ of \eqref{eq:trials} satisfies, $\forall t \geq t_0$, \[ \psi_r\pa{\anorm{(x(t),y(t))-(x^\star,y^\star)}} = {\mathcal O} \pa{ \frac{1}{\alpha(t)^2 \sigma(t)^2 b(t)}}. \] As a consequence, assuming that $\lim_{t\to +\infty} \alpha(t)^2 \sigma(t)^2 b(t) = +\infty$, we have that the trajectory $t\mapsto (x(t), y(t))$ converges strongly to $(x^\star, y^\star)$ as $t\to +\infty$. \end{theorem} \begin{proof} Uniformly convex functions are strictly convex and coercive, and thus $(x^\star,y^\star)$ is unique. From Theorem~\ref{ACFR_rescale_boundedness}\ref{ACFR_rescale_boundedness:itemii}, there exists $r_1 > 0$ such that \[ \sup_{t \geq t_0} \norm{(x(t),y(t))-(x^\star,y^\star)} \leq r_1 . \] Taking $r \geq r_1 + \norm{(x^\star,y^\star)}$, we have that the trajectory $(x(\cdot),y(\cdot))$ and $(x^\star,y^\star)$ are both contained in the ball of radius $r$ centered at the origin. Let $\lambda^\star$ be a Lagrange multiplier of problem \eqref{eq:P}, {\it i.e.}\,\, $(x^\star,y^\star,\lambda^\star) \in {\mathscr S}$. 
On the one hand, applying the uniform convexity inequality \eqref{eq:uniformconv} at $v=(x(t),y(t))$ and $w=(x^\star,y^\star)$, we have \begin{multline*} F(x(t),y(t)) \geq F(x^\star,y^\star) + \dotp{\nabla F(x^\star,y^\star)}{(x(t),y(t))-(x^\star,y^\star)} \\ + \psi_r\pa{\anorm{(x(t),y(t))-(x^\star,y^\star)}} . \end{multline*} On the other hand, the optimality conditions \eqref{opt_system} tell us that \[ \dotp{\nabla F(x^\star,y^\star)}{(x(t),y(t))-(x^\star,y^\star)} = -\dotp{\lambda^\star}{Ax(t)+By(t)-c} \] and obviously \[ F(x^\star,y^\star) = {\mathcal L}(x^\star,y^\star,\lambda^\star) . \] Thus, \begin{eqnarray*} \psi_r\pa{\anorm{(x(t),y(t))-(x^\star,y^\star)}} \leq {\mathcal L}(x(t),y(t),\lambda^\star) - {\mathcal L}(x^\star,y^\star,\lambda^\star) . \end{eqnarray*} Invoking the estimate in Theorem~\ref{ACFR,rescale}\ref{theoACFRrescale:itemb}\ref{theoACFRrescale:itembi} yields the claim. \qed \end{proof} \begin{remark}\label{rem:convergence_s} The assumption $\lim_{t\to +\infty} \alpha(t)^2 \sigma(t)^2 b(t) = +\infty$ made in the above theorem is very mild. It holds in particular in all the situations discussed in Section~\ref{sec:particular}. For instance, for $\alpha (t) = t^r$, $0\leq r<1$, $\sigma$ constant, and $b(t)= \frac{1}{t^{2r}} \exp \bpa{\frac{1}{1-r}t^{1-r}}$, one has $\alpha(t)\sqrt{b(t)}= \exp\bpa{\frac{1}{2(1-r)}t^{1-r}}$. Thus, if $F$ is strongly convex, the trajectory $t \mapsto (x(t),y(t))$ converges exponentially fast to the unique minimizer $(x^\star,y^\star)$. \end{remark} \if { \begin{remark} Let us verify in an elementary situation that, under the assumptions of Theorems 1 and 2, the coupling term is indeed effective in ensuring the convergence of each trajectory towards an equilibrium. Consider $ f = g = $ 0, and $ A = -B = I $, $ c = 0 $, in which case there is a continuum of solutions, namely the entire space. The constraint is equivalent to $ x = y $.
The system \eqref{eq:trials} is written \begin{equation*} \left\{\begin{array}{lll} \; \ddot x+\gamma (t) \dot x + b(t) \Big( \lambda + \alpha(t) \dot\lambda + \mu (x-y) \Big) &=&0 \\ \; \ddot y+\gamma (t)\dot y - b(t)\Big( \lambda + \alpha(t) \dot\lambda + \mu (x-y) \Big) &=&0 \\ \; \ddot \lambda+\gamma (t)\dot \lambda - b(t) \Big( x + \alpha(t)\dot x -(y + \alpha(t)\dot y) \Big)&=&0. \end{array}\right. \end{equation*} \end{remark} By adding the first two equations, and defining $s=x+y$, we get $$ \ddot{s}+\gamma (t) \dot{s}=0. $$ So, under the condition $(H_0)$, we get that the limit of $s(t)=x(t)+y(t)$ exists, as $t\to +\infty$. Moreover by Theorem 1, we know that the limit of $Ax(t)+By(t)$ is equal to zero, under the assumption $\lim_{t\to +\infty} \alpha(t)^2 \sigma(t)^2 b(t)=+\infty$. According to $A=-B=I$ this implies that the limit of $x-y$ is zero. Thus $x(t)+y(t)$ and $x(t)-y(t)$ converge, which implies that $x(t)$ and $y(t)$ converge, with the same limit (since their difference tends to zero). On the other hand, even in this elementary situation, the convergence of $\lambda(t)$ is a non-trivial question. } \fi \section{Parameters choice for fast convergence rates}\label{sec:particular} In this section, we suppose that the solution set ${\mathscr S}$ of the saddle value problem \eqref{opt_system} is non-empty, so as to invoke Theorem~\ref{ACFR,rescale}\ref{theoACFRrescale:itemb}, Theorem~\ref{ACFR_rescale_boundedness} and Theorem~\ref{Lyap_gen}. The set of conditions \ref{cond:G1+}, \ref{cond:G2}, \ref{cond:G3}, \ref{cond:G4} and \ref{cond:G5} imposes sufficient assumptions on the coefficients $\gamma, \, \alpha, \, b$ of the dynamical system \eqref{eq:trials}, and on the coefficients $\sigma, \delta, \xi$ of the function ${\mathcal E}$ defined in \eqref{eq:lyapcont}, which guarantee that ${\mathcal E}$ is a Lyapunov function for the dynamical system \eqref{eq:trials}. 
Let us show that this system admits many solutions of practical interest which in turn will entail fast convergence rates. For this, we will organize our discussion around the coefficient $\alpha$ as dictated by Theorem~\ref{Lyap_gen}. Indeed, the latter shows that the convergence rate of the Lagrangian values and feasibility is $\displaystyle{{\mathcal O}\pa{\exp\pa{-\int_{t_0}^t\frac{1}{\alpha(s)} ds}}}$. Therefore, to obtain a meaningful convergence result, we need to assume that \[ \int_{t_0}^{+\infty}\frac{1}{\alpha(s)} ds = +\infty. \] This means that the critical growth is $\alpha(t) = a t$ for $a > 0$. If $\alpha(t)$ grows faster, our analysis does not provide an instructive convergence rate. So, it is an essential ingredient of our approach to assume that $\alpha (t)$ remains positive, but not too large as $t\to +\infty$. In fact, the set of conditions \ref{cond:G1+}, \ref{cond:G2}, \ref{cond:G3}, \ref{cond:G4} and \ref{cond:G5} simplifies considerably by taking $\sigma$ a positive constant, and $\gamma\alpha -\dot \alpha$ a constant strictly greater than one. This is made precise in the following statement, whose proof is immediate. \begin{corollary}\label{cor:paramchoice} Suppose that $\sigma \equiv \sigma_0$ is a positive constant, and $\gamma\alpha -\dot \alpha \equiv \eta>1$. Then the set of conditions \ref{cond:G1+}, \ref{cond:G2}, \ref{cond:G3}, \ref{cond:G4} and \ref{cond:G5} reduces to \begin{equation}\label{eq:allconds} b(1 + 2\eta - 2 \gamma \alpha) - \alpha \dot{b} = 0 \text{ and } \inf_{t\geq t_0}\alpha (t)>0. \end{equation} \end{corollary} Following the above discussion, we are led to consider the following three cases. \subsection{Constant parameter $\alpha$} Consider the simple situation where $\sigma \equiv \sigma_0 > 0$ and $\eta > 1$, in which case \ref{cond:G1+} reads $\gamma\alpha - \dot{\alpha} - 1 = \eta - 1 > 0$.
Taking $\alpha \equiv \alpha_0$, a positive constant, yields $\gamma \equiv \frac{\eta}{\alpha_0}$ and \eqref{eq:allconds} amounts to solving \[ b - \alpha_0 \dot b = 0 , \] that is, $b(t)= \exp\pa{{\frac{t}{\alpha_0}}}$. Capitalizing on Corollary~\ref{cor:paramchoice}, and specializing the results of Theorem~\ref{ACFR,rescale}\ref{theoACFRrescale:itemb} (or equivalently Theorem~\ref{Lyap_gen} according to Remark~\ref{rem:Lyap_gen}) and Theorem~\ref{ACFR_rescale_boundedness} to the current choice of parameters yields the following statement. \begin{proposition} \label{O-exp} Suppose that $\sigma \equiv \sigma_0 > 0$, $\eta > 1$, and that the coefficients of \eqref{eq:trials} satisfy: the functions $\alpha, \gamma$ are constant with \[ \alpha \equiv \alpha_0 >0, \; \gamma \equiv \frac{\eta}{\alpha_0}, \; b(t)= \exp\pa{{\frac{t}{\alpha_0}}}. \] Suppose that ${\mathscr S}$ is non-empty. Then, for any solution trajectory $(x(\cdot),y(\cdot),\lambda(\cdot))$ of \eqref{eq:trials}, the trajectory and its velocity remain bounded, and we have \begin{align*} {\mathcal L}(x(t),y(t),\lambda^\star) - {\mathcal L}(x^\star,y^\star,\lambda^\star) &= {\mathcal O}\pa{\exp\pa{-\frac{t}{\alpha_0}}}, \\ \norm{Ax(t)+By(t)-c}^2 &= {\mathcal O}\pa{\exp\pa{-\frac{t}{\alpha_0}}} , \\ -C_1\exp\pa{-\frac{t}{2\alpha_0}} \leq F( x(t),y(t))-F^\star &\leq C_2\exp\pa{-\frac{t}{\alpha_0}} , \\ \anorm{(\dot {x}(\cdot), \dot {y}(\cdot), \dot{\lambda}(\cdot))} &\in L^2([t_0,+\infty[) , \end{align*} where $C_1$ and $C_2$ are positive constants. \end{proposition} \subsection{Linearly increasing parameter $\alpha$}\label{sec:var} We now take $\sigma \equiv \sigma_0 > 0$, $\eta > 1$ and $\alpha(t)= \alpha_{0} t$ with $\alpha_{0} > 0$. Then \ref{cond:G1+} is satisfied and we have $\gamma(t) = \frac{\eta+\alpha_0}{\alpha_0 t}$. Condition~\eqref{eq:allconds} then becomes \[ b(t) (1 - 2\alpha_0) - \alpha_0 t \dot b(t) = 0, \] which admits $b(t)= t^{\frac{1}{\alpha_0}-2}$ as a solution. 
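For completeness, this choice can be checked by direct differentiation (a routine verification added here, not in the original):
\[
\dot b(t) = \Big(\frac{1}{\alpha_0}-2\Big) t^{\frac{1}{\alpha_0}-3}
\quad\Longrightarrow\quad
\alpha_0 t\,\dot b(t) = (1-2\alpha_0)\, t^{\frac{1}{\alpha_0}-2} = (1-2\alpha_0)\, b(t),
\]
so that $b(t)(1-2\alpha_0) - \alpha_0 t\,\dot b(t) = 0$, as required.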
We then have $b \equiv 1$ for $\alpha_0=1/2$, while one can distinguish two regimes for its limiting behaviour with \[ \lim_{t \to +\infty} b(t) = \begin{cases} +\infty & \alpha_{0} < \demi , \\ 0 & \alpha_{0} > \demi . \end{cases} \] In view of Theorem~\ref{ACFR,rescale}\ref{theoACFRrescale:itemb} and Theorem~\ref{ACFR_rescale_boundedness}, we obtain the following result. \begin{proposition} \label{O-1/t2} Suppose that $\sigma \equiv \sigma_0 > 0$, $\eta > 1$, and that the coefficients of \eqref{eq:trials} satisfy \[ \alpha(t)= \alpha_{0} t \mbox{ with } \alpha_{0} >0, \; \gamma(t) = \frac{\eta+\alpha_0}{\alpha_0 t}, \; b(t)= t^{\frac{1}{\alpha_0}-2}. \] Suppose that ${\mathscr S}$ is non-empty. Then, for any solution trajectory $(x(\cdot),y(\cdot),\lambda(\cdot))$ of \eqref{eq:trials}, the trajectory remains bounded, and we have the following convergence rates: \begin{align*} {\mathcal L}(x(t),y(t),\lambda^\star) - {\mathcal L}(x^\star,y^\star,\lambda^\star) &= {\mathcal O}\pa{\frac{1}{t^{\frac{1}{\alpha_0}}}}, \\ \norm{Ax(t)+By(t)-c}^2 &= {\mathcal O}\pa{\frac{1}{t^{\frac{1}{\alpha_0}}}} , \\ -\frac{C_1}{t^{\frac{1}{2\alpha_0}}} \leq F( x(t),y(t))-F^\star &\leq \frac{C_2}{t^{\frac{1}{\alpha_0}}} , \\ \anorm{(\dot {x}(t), \dot {y}(t), \dot{\lambda}(t))} &= {\mathcal O}\pa{\dfrac1{t}} , \end{align*} where $C_1$ and $C_2$ are positive constants. \end{proposition} \subsection{Power-type parameter $\alpha$} Let us now take $\sigma \equiv \sigma_0 > 0$, $\eta > 1$ and consider the intermediate case between the two previous situations, where $\alpha (t) = t^r$, $0<r<1$. Thus \ref{cond:G1+} is satisfied and we have $\gamma(t) = \frac{\eta}{t^r} + \frac{r}{t}$. Condition~\eqref{eq:allconds} is then equivalent to \[ b(t) (1- 2r t^{r-1} ) - t^r \dot b(t) = 0, \] which, after integration, shows that $b(t)= \frac{1}{t^{2r}} \exp\pa{ \frac{1}{1-r}t^{1-r}}$ is a solution.
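Again, one can verify this choice directly (our check, not in the original). Differentiating,
\[
\dot b(t) = b(t)\Big(-\frac{2r}{t} + t^{-r}\Big)
\quad\Longrightarrow\quad
t^r\,\dot b(t) = b(t)\big(1 - 2r\,t^{r-1}\big),
\]
so that $b(t)\big(1-2r\,t^{r-1}\big) - t^r\,\dot b(t) = 0$, as required.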
Appealing again to Theorem~\ref{ACFR,rescale}\ref{theoACFRrescale:itemb} and Theorem~\ref{ACFR_rescale_boundedness}, we obtain the following claim. \begin{proposition} \label{alpha-puissance} Take $\sigma \equiv \sigma_0 > 0$ and $\eta > 1$. Suppose that the coefficients of \eqref{eq:trials} satisfy \[ \alpha(t)= t^r \mbox{ with } 0<r<1, \; \gamma(t) = \frac{\eta}{t^r} + \frac{r}{t}, \; b(t)= \frac{1}{t^{2r}} \exp\pa{ \frac{1}{1-r}t^{1-r}} . \] Suppose that ${\mathscr S}$ is non-empty. Then, for any solution trajectory $(x(\cdot),y(\cdot),\lambda(\cdot))$ of \eqref{eq:trials}, the trajectory remains bounded, and we have the convergence rates: \begin{align*} {\mathcal L}(x(t),y(t),\lambda^\star) - {\mathcal L}(x^\star,y^\star,\lambda^\star) &= {\mathcal O}\pa{\exp\pa{-\frac{1}{1-r}t^{1-r}}}, \\ \norm{Ax(t)+By(t)-c}^2 &= {\mathcal O}\pa{\exp\pa{-\frac{1}{1-r}t^{1-r}}} , \\ -C_1\exp\pa{-\frac{1}{2(1-r)}t^{1-r}} \leq F( x(t),y(t))-F^\star &\leq C_2\exp\pa{-\frac{1}{1-r}t^{1-r}} , \\ \anorm{(\dot {x}(t), \dot {y}(t), \dot{\lambda}(t))} &= {\mathcal O}\pa{\dfrac1{t^r}} , \end{align*} where $C_1$ and $C_2$ are positive constants. \end{proposition} \if{ \begin{remark} In the above results, we see again the trade-off between the damping parameter $\gamma$ and the extrapolation parameter $\alpha$. This phenomenon was observed by the authors in \cite{ACR-Optimization-2020} in the study of a third order dynamic for optimization. \end{remark} } \fi \subsection{Numerical experiments}\label{sec:numerics} To support our theoretical claims, we consider in this section two numerical examples with ${\mathcal X}= {\mathcal Y}= \mathcal Z= {\mathbb R}^2$, one with a strongly convex objective $F$ and one where $F$ is convex but not strongly so.
\begin{enumerate}[label={\bf{Example}~\arabic*},itemindent=10ex] \item We consider the quadratic programming problem \[ \min_{(x,y) \in {\mathbb R}^4} F (x, y) = \anorm{x - (1,1)^\mathrm{T}}^2 + \anorm{y}^2 \quad \text{~subject to~} y = x + (-x_2,0)^\mathrm{T}, \] whose objective is strongly convex and verifies all required assumptions. \item We consider the minimization problem \[ \min_{(x,y) \in {\mathbb R}^4} F (x, y) = \log\pa{1+\exp\pa{-\dotp{(1,1)^\mathrm{T}}{x}}} + \anorm{y}^2 \quad \text{~subject to~} y = x + (-x_2,0)^\mathrm{T} . \] The objective is convex (but not strongly so) and smooth as required. This problem is reminiscent of (regularized) logistic regression very popular in machine learning. \end{enumerate} In all our numerical experiments, we consider the continuous time dynamical system \eqref{eq:trials}, solved numerically with a Runge-Kutta adaptive method (ode45 in MATLAB) on the time interval $[1, 20]$. \begin{figure} \caption{$\alpha(t) \equiv \alpha_0$} \caption{$\alpha(t) = \alpha_0 t$} \caption{$\alpha(t) = t^r$} \label{fig:trials} \end{figure} \begin{figure} \caption{$\alpha(t) \equiv \alpha_0$} \caption{$\alpha(t) = \alpha_0 t$} \caption{$\alpha(t) = t^r$} \label{fig:trialsstrong} \end{figure} For the solely convex (resp. strongly convex) objective, Figure~\ref{fig:trials} (resp. Figure~\ref{fig:trialsstrong}) displays the objective error $\abs{F(x(t),y(t))-F^\star}$ on the left, the feasibility gap $\anorm{Ax(t)+By(t)-c}$ in the middle, and the velocity $\anorm{(\dot {x}(t), \dot {y}(t), \dot{\lambda}(t))}$ on the right. In each figure, the first row shows the results for $\alpha(t) \equiv \alpha_0$ with $\alpha_0 \in \bra{1,2,4}$, the second row corresponds to $\alpha(t)=\alpha_0 t$ with $\alpha_0 \in \bra{0.25,0.5,1}$ and the third row to $\alpha(t)=t^r$ with $r \in \bra{0.01,0.1,0.5}$. In all our experiments, we set $\mu=10$ (recall that $\mu$ is the parameter associated with the augmented Lagrangian formulation). 
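For readers who wish to experiment, the following Python sketch (ours, not the paper's MATLAB code) integrates the elementary instance of \eqref{eq:trials} with $f=g=0$, $A=-B=I$, $c=0$ discussed in the remarks, in the constant-$\alpha$ regime of Proposition~\ref{O-exp}. SciPy's adaptive Runge-Kutta solver plays the role of \texttt{ode45}, and all parameter values ($\alpha_0$, $\eta$, $\mu$, initial data) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Elementary instance of (eq:trials) with f = g = 0, A = -B = I, c = 0 (scalar
# variables), in the constant-parameter regime of Proposition O-exp:
# alpha(t) = alpha0, gamma = eta/alpha0, b(t) = exp(t/alpha0).
alpha0, eta, mu = 2.0, 2.0, 10.0   # illustrative choices (eta > 1 required)
gamma = eta / alpha0

def rhs(t, s):
    x, y, lam, dx, dy, dlam = s
    b = np.exp(t / alpha0)
    coupling = lam + alpha0 * dlam + mu * (x - y)
    ddx = -gamma * dx - b * coupling
    ddy = -gamma * dy + b * coupling
    ddlam = -gamma * dlam + b * ((x + alpha0 * dx) - (y + alpha0 * dy))
    return [dx, dy, dlam, ddx, ddy, ddlam]

# start from an infeasible point x0 != y0, with zero initial velocities
y0 = [1.0, -1.0, 0.5, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (1.0, 10.0), y0, rtol=1e-9, atol=1e-12)

# Adding the first two equations gives s'' + gamma*s' = 0 for s = x + y;
# with zero initial velocities, s(t) stays equal to x0 + y0 = 0.
print("x + y at t = 10:", sol.y[0, -1] + sol.y[1, -1])
print("feasibility gap |x - y| at t = 10:", abs(sol.y[0, -1] - sol.y[1, -1]))
```

The conserved sum $x(t)+y(t)$ gives a cheap correctness check on the integration, while the shrinking gap $|x(t)-y(t)|$ illustrates the feasibility decay predicted by the theory.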
All these choices of the parameters comply with the requirements of Propositions~\ref{O-exp}, \ref{O-1/t2} and \ref{alpha-puissance}. The numerical results are in excellent agreement with our theoretical results, where the values, the velocities and the feasibility gap all converge at the predicted rates. \if { \begin{figure} \caption{\eqref{eq:trials}} \label{fig:trials} \end{figure} } \fi \if { \noindent \textbf{Example 1} Here $ \alpha = 10^{- 0.1} $ and $ \mu = 10^{- 5} $. In this example, we treat the case with only one variable: $$ \min F (x, y) = f (x) + g (y) = \frac12 (x-1)^2 + \frac12y^2 \mbox{ subject to }: Ax + By = x - y = 1 = c $$ We notice that : $\bullet$ The convergences of values, constraints and solutions are of the same type, although the values are almost twice as fast as the others. $\bullet$ From a certain order in time (which is fast) the variations become unchanged. \noindent \textbf{Example 2} Here $ \alpha $ varies and $ \mu = 10^{- 5} $. This example is similar to Example 1. Note that the variation of $ \alpha $ acts on the three estimates of the values, the constraints and the solutions. More and more $ \alpha $ grows, the variations decrease up to a certain order (here $ \alpha = 10 ^ {- 0.05} $), and then increase until exploding when $ \alpha \geq 10^{-2} $. \noindent \textbf{Example 3} Here it is similar to Example 2, but with another two-variable function: $$ \min F (x, y) = f (x) + g (y) = \frac12 ( x_1^2 + x_2^2) - \ln (y_1 y_2) $$ subject to: $ Ax + By = c $ translated by $ x_1 + x_2-y_1 = -1 $ and $ x_1-x_2 + y_1 = 1 $ with $ y_1 , y_2> $ 0. \noindent \textbf{Example 4} Similar to Example 3. \noindent \textbf{Example 5} Here we deal with the problem $$ \min F (x, y) = f (x) + g (y) = \frac12 \| x- (1,1)^* \|^2 ) + \| y \|^2 $$ subject to: $ Ax + By = x - y = (1,1)^* = c $. Note that for two variables the convergences are similar to those in Example 1 with only one variable. 
\noindent \textbf{ Example 6} Let us return to the situation considered in Example 5, with a similar problem without constraints $ \min f (x) = \| x- (1,1)^* \|^2 $. A similar dynamic system provides a faster convergence rate. Therefore the numerical resolution of the systems associated with problems under constraints is more hampered in time and in estimation than that for those without constraints. This is normal, given that the constraints impose multipliers which make the digital processing more troublesome. } \fi \if { \begin{remark} Let us verify on an elementary situation that indeed under the assumptions of theorems 1 and 2, the coupling term is effective to ensure the convergence of each trajectories towards an equilibrium. Consider $ f = g = $ 0, and $ A = -B = I $, $ c = 0 $ in which case there is a continuum of solutions which is the entire space. The constraint is equivalent to $ x = y $. The system \eqref{eq:trials} is written \begin{equation*} \left\{\begin{array}{lll} \; \ddot x+\gamma (t) \dot x + b(t) \Big( \lambda + \alpha(t) \dot\lambda + \mu (x-y) \Big) &=&0 \\ \; \ddot y+\gamma (t)\dot y - b(t)\Big( \lambda + \alpha(t) \dot\lambda + \mu (x-y) \Big) &=&0 \\ \; \ddot \lambda+\gamma (t)\dot \lambda - b(t) \Big( x + \alpha(t)\dot x -(y + \alpha(t)\dot y) \Big)&=&0. \end{array}\right. \end{equation*} \end{remark} By adding the first two equations, and defining $s=x+y$, we get $$ \ddot{s}+\gamma (t) \dot{s}=0. $$ So, under the condition $(H_0)$, we get that the limit of $s(t)=x(t)+y(t)$ exists, as $t\to +\infty$. Moreover by Theorem 1, we know that the limit of $Ax(t)+By(t)$ is equal to zero, under the assumption $\lim_{t\to +\infty} \alpha(t)^2 \sigma(t)^2 b(t)=+\infty$. According to $A=-B=I$ this implies that the limit of $x-y$ is zero. Thus $x(t)+y(t)$ and $x(t)-y(t)$ converge, which implies that $x(t)$ and $y(t)$ converge, with the same limit (since their difference tends to zero). 
On the other hand, even in this elementary situation, the convergence of $\lambda(t)$ is a non-trivial question. } \fi \section{Conclusion, perspectives}\label{sec:conclusion} In this paper, we adopted a dynamical system perspective and we have proposed a second-order inertial system enjoying provably fast convergence rates to solve structured convex optimization problems with an affine constraint. One of the most original aspects of our study is the introduction of a damped inertial dynamic involving several time-dependent parameters with specific properties. They allow to consider a variable viscosity coefficient (possibly vanishing so making the link with the Nesterov accelerated gradient method), as well as variable extrapolation parameters (possibly large) and time scaling. The analysis of the subtle and intricate interplay between these objects together has been made possible through Lyapunov's analysis. It would have been quite difficult to undertake such an analysis directly on the algorithmic discrete form. On the other hand, as we have now gained a deeper understanding with such a powerful continuous-time framework, we believe this will serve us as a guide to design and analyze a class of inertial ADMM algorithms which can be naturally obtained by appropriate discretization of the dynamics \eqref{eq:trials}. Their full study would go beyond the scope of this paper and will be the subject of future work. Besides, several other open questions remain to be studied, among which, the introduction of geometric damping controlled by the Hessian, and the convergence of the trajectories in the general convex constrained case. \end{document} \end{document}
Number of atoms per unit cell

I don't understand the concept of a unit cell and the number of atoms per unit cell in a cubic lattice, nor the calculations for the number of atoms. For example, in the $\ce{fcc}$ lattice the number of atoms per unit cell is: $$8\cdot\frac{1}{8} + 6\cdot\frac{1}{2}=4$$ What do the 8 and 2 in the denominators stand for? And the 4?

crystal-structure solid-state-chemistry crystallography

Wafaa J. Gh

The denominator signifies the number of cubes needed to completely encompass the point in question. For example, a corner point can be thought of as the center of 8 whole cubes, while a face centre is shared by 2 cubes and an edge center by 4. Hence, only 1/8 of a corner atom belongs to a specific unit cell, and so on. Consequently, the total number of atoms in a unit cell (say an FCC) equals

(no. of corners)(fraction of each corner atom in the unit cell) = 8(1/8)

plus

(no. of face centers)(fraction of each face-center atom in the unit cell) = 6(1/2),

which equals 4.

Ayushmaan

$\begingroup$ It should be noted that this answer (and the question!) only makes sense for a unit cell which contains an atom in the corner and one on each symmetry-equivalent face. You might then translate the unit cell (or the atoms) to show how one can arrive at the end result of four in a different way. $\endgroup$ – Jan Oct 8 '17 at 15:59
In how many ways can 81 be written as the sum of three positive perfect squares if the order of the three perfect squares does not matter? Since we are partitioning 81 into sums of perfect squares, we proceed by subtracting out perfect squares and seeing which work: $81 - 64 = 17 = 16 + 1$. Further, $81 - 49 = 32 = 16+ 16$. And finally, $81 - 36 = 45 = 36 + 9$. Although there is more to check through, this sort of method should convince us that these are the only $\boxed{3}$ solutions: $1^2 + 4^2 + 8^2 = 81$, $4^2 + 4^2 + 7^2 = 81$, and $3^2 + 6^2 + 6^2 = 81$.
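As a sanity check (added here; not part of the original solution), a brute-force enumeration over ordered triples $1 \le a \le b \le c \le 9$ confirms the count:

```python
# Count unordered triples (a, b, c) of positive integers with a^2 + b^2 + c^2 = 81.
# Enforcing a <= b <= c avoids counting reorderings of the same three squares.
triples = [(a, b, c)
           for a in range(1, 10)
           for b in range(a, 10)
           for c in range(b, 10)
           if a*a + b*b + c*c == 81]
print(triples)       # [(1, 4, 8), (3, 6, 6), (4, 4, 7)]
print(len(triples))  # 3
```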
\begin{document} \title{Efficient classical simulation of cluster state quantum circuits with alternative inputs} \author{Sahar Atallah $^1$, Michael Garn $^{1,}$\footnote{[email protected]}, Sania Jevtic $^2$, Yukuan Tao $^{3,}$\footnote{[email protected]}, and Shashank Virmani$^{1,}$\footnote{[email protected]}} \affiliation{$^1$Department of Mathematics, Brunel University London, Kingston Ln, Uxbridge, UB8 3PH, United Kingdom, $^2$ Phytoform Labs Ltd., Lawes Open Innovation Hub, West Common, Harpenden, Hertfordshire, England, AL5 2JQ, United Kingdom, $^3$Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire, 03755, USA} \date{\today} \begin{abstract} We provide new examples of pure entangled systems related to cluster state quantum computation that can be efficiently simulated classically. In cluster state quantum computation input qubits are initialised in the `equator' of the Bloch sphere, $CZ$ gates are applied, and finally the qubits are measured adaptively using $Z$ measurements or measurements of $\cos(\theta)X + \sin(\theta)Y$ operators. We consider what happens when the initialisation step is modified, and show that for lattices of finite degree $D$, there is a constant $\lambda \approx 2.06$ such that if the qubits are prepared in a state that is within $\lambda^{-D}$ in trace distance of a state that is diagonal in the computational basis, then the system can be efficiently simulated classically in the sense of sampling from the output distribution within a desired total variation distance. In the square lattice with $D=4$ for instance, $\lambda^{-D} \approx 0.056$. We develop a coarse grained version of the argument which increases the size of the classically efficient region. In the case of the square lattice of qubits, the classically simulatable region increases in size to at least $\approx 0.070$, and in fact probably to around $\approx 0.1$.
The results generalise to a broader family of systems, including qudit systems where the interaction is diagonal in the computational basis and the measurements are either in the computational basis or unbiased to it. Potential readers who only want the short version can get much of the intuition from figures 1 to 3. \end{abstract} \maketitle \section{Introduction and summary of main results} An important open problem in quantum computing is to understand when quantum systems can or cannot be efficiently simulated classically. The observation that classical simulation methods fail to be efficient for generic quantum systems was the original motivation for quantum computation \cite{Preskill}. While there are rigorous proofs of quantum computational advantage in certain settings such as communication complexity or with depth restrictions \cite{Raz,tomamichel2019quantum}, it is still in principle possible (however unlikely) that without such restrictions quantum computers can be efficiently simulated classically. The question has been given added impetus recently, as it is central to the discussion surrounding recent quantum supremacy experiments \cite{Google,Pan,napp2019efficient,noh2020efficient,AharonovGao}. Classical algorithms that have been proposed for simulating quantum systems often have a wide variety of aims, such as computing probabilities, estimating physical quantities in many-body systems, sampling observed distributions, or theoretical investigation of the sources of quantum advantage. Furthermore, different definitions of the phrase ``efficient classical simulation" are often used (see \cite{Hakop} for a recent discussion of various possibilities). However, in spite of this diversity of aims and motivations, certain themes repeatedly arise. 
This is because the classical algorithms that have been proposed often work by (a) singling out a special feature of quantum theory as one of the possible sources of non-classicality, and then (b) simulating systems for which that feature is limited. Some of the earliest examples of this approach highlighted quantum entanglement as the `special feature'. It was shown that in some settings a lack of quantum entanglement (perhaps due to noise) can lead to efficient classical simulation algorithms \cite{Jozsa,ABnoisy,HN}. Even if entanglement is present, but its structure is limited to being of low `width' or `tree-width', then methods with a tensor-network flavour can often be exploited to provide efficient classical simulations \cite{Nielsen1D, Yoran, markov, jozsa2006simulation,van2007classical,Hastings}. In the context of many-body physics, such limited entanglement may be exploited to compute physically important quantities \cite{Vidal,peps}. However, it is by now well known that entanglement is far from the full story. Seemingly weak amounts of entanglement can in fact be strong enough for quantum computation \cite{van2013universal,gross2007measurement}, and seemingly large amounts of quantum entanglement can be efficiently simulated classically. The well known Gottesman-Knill theorem \cite{GK}, for example, shows that stabilizer computations - which can demonstrate highly non-classical features such as non-locality - can be efficiently simulated classically without some form of additional `magic' \cite{KnillMagic,BK}. This idea has been expanded upon in several works (e.g. \cite{nest2008classical,jozsa2013classical}), and has applications such as providing classical simulations of systems with a small amount of `magic' \cite{BravyiRank,pashayan2021fast,seddon2021quantifying,qassim2021improved}, as well as providing upper bounds to fault tolerance thresholds \cite{VHP05,Buhrman}. 
Through the formalism of discrete Wigner functions there are also connections between the stabilizer formalism and quasi-probability distributions that arise in attempts to give quantum theory an alternative `realistic' description \cite{Galvao,GrossPhase}. A number of other (sometimes efficient) classical descriptions of quantum computation based on quasi-probability distributions or non-quantum operators have also been developed, e.g. \cite{Galvao,Pashayan2015WB,raussendorf2020phase,okay2021extremal,zurel2020hidden,RV,RVprep,AJRV1}. Our categorisation of the above classical algorithms into various themes is not objective, there are often connections between them, and moreover there are improvements or further insights to be gained by combining different approaches (e.g. \cite{gosset2020fast} combines Gottesman-Knill with tensor network methods, and \cite{schwarz2013simulating} building upon \cite{nest2011classical} shows that quantum advantage requires output distributions to be not too sparse). Another important class of classically efficiently simulatable quantum systems arises from Valiant's matchgate algorithm and extensions \cite{Valiant,Terhalferm,jozsa2008matchgates,Brod}. These algorithms provide efficient classical simulations of entangled systems that are strongly related to fermionic physics \cite{Terhalferm}. The work of \cite{Somma} generalised some of these ideas to what the authors termed {\it Lie Algebraic} models of computing (`LQC'). One of the themes of that work is an idea that the notion of entanglement can be generalised \cite{Barnum} to a new notion which is defined relative to a privileged set of observables, and in some situations this can be exploited to give classical simulation methods. 
A different type of generalised entanglement (although it is still an instance of the broad framework put forward in \cite{Barnum}) was utilised by some of us in \cite{RV,RVprep,AJRV1} to develop local hidden variable models and classical simulation methods in other situations. Our present work will be in a similar vein to these works. In this work we will exploit a version of generalised entanglement (or more precisely generalised separability) to provide new examples of (pure) multiparty entangled systems that can be efficiently simulated classically. All our results concern variations on cluster state quantum computing, and are summarised as follows: \begin{enumerate} \item We will begin by considering what happens when we vary the inputs of qubit cluster state quantum computing \cite{Raussendorf} architectures, keeping the permitted measurements the same. We will find that when the inputs are initialised not in the standard $\ket{+}$ states, but in states that are sufficiently close to a state that is diagonal in the computational basis, then the systems can be efficiently simulated classically in the sense of sampling the outcomes to arbitrary additive error in polynomial time (see footnote {\footnote{This means sampling in polynomial time from a probability distribution $p$ that satisfies $\|p-q\| \leq \epsilon$ where $q$ is the probability distribution of measurement outcomes on the system, $\epsilon > 0$ is an arbitrary fixed constant, and the norm is the total variation distance}} for a precise definition of this). This is in spite of the systems including pure multiparty entangled states containing a large amount of `magic' \cite{KnillMagic,BK}, and retaining several of the important features present in cluster state quantum computing (except for a particular form of non-locality - see later discussion). The key technical idea is a lemma showing that the control-$Z$ ($CZ$) interactions do not lead to a particular generalised form of entanglement.
For our purposes this means the following: in the usual expression $\sum_i p_i \rho^A_i \otimes \rho^B_i$ for quantum separable states the local operators $\rho^A_i,\rho^B_i$ must be quantum states, but we will relax this, allowing the local operators $\rho^A_i, \rho^B_i$ to come from more general sets than the local quantum states, e.g. allowing some operators with negative eigenvalues. We will later see that if we are careful to control this `negativity', we may exploit generalised separable decompositions, together with an existing method \cite{HN} for non-entangled quantum systems, to efficiently simulate certain input states. In particular, defining the radius $r$ of a unit trace $2 \times 2$ hermitian matrix $\rho$ as $r := \|\rho-\rho_{diag}\|$, where the norm is the trace norm and $\rho_{diag}$ is obtained from $\rho$ by setting off-diagonal elements to zero, our algorithm classically simulates efficiently when the inputs have $r \leq \lambda^{-D}$, where $\lambda \approx 2.06$ and $D$ is the maximum degree of the underlying graph. Note that the inputs can include non-quantum operators with negative eigenvalues; however, it turns out that they do not lead to negative probabilities when used as inputs for the cluster state circuits that we consider. \item We then generalise the approach to measurement based quantum computation in what we term {\it privileged basis system} measurement based quantum computation (``PBS'' for short). While we define these systems precisely later, they are measurement based quantum computations in which the computational resource state (i.e. the analogue of the cluster state) is created by diagonal (in the computational basis) unitaries acting upon input particles placed on a lattice, and the allowed measurements are restricted in a particular way. This class of systems includes the original cluster state scheme, as well as a number of other MBQC proposals (e.g. \cite{gross2007measurement,KissingerW,Tomo,MillerM, Hall}).
For any such systems our results imply that there is an analogue of $\lambda^{-1}$, i.e. there is a constant $c$ such that if the input particles are initialised from within a particular set of `size' (a term whose meaning will become clear later) $c^D$ around the diagonal states, then the systems can be efficiently simulated classically. We remark that the results demonstrate that any PBS that is capable of non-classical computation for at least some input states (examples include \cite{gross2007measurement,KissingerW,Tomo,MillerM,Hall}) has at least one non-trivial ``computational transition'': for all such systems there is a finite size convex region of inputs, including pure inputs, that can be efficiently simulated classically, even though for other inputs non-classical computation is possible. \item Borrowing a common paradigm from many-body physics, we then explore a `coarse grained' version of the simulation method for sufficiently regular lattices. By this we mean the following: we cut the system into blocks of qubits, treat each block as a single particle on a new lattice, and then construct a decomposition over these blocks that does not exhibit a particular generalised form of entanglement. It turns out that this process leads to classical simulation algorithms for an increased set of inputs, as well as some interesting mathematical structure.
In particular, given a suitable lattice we parameterise the size of each block by a positive integer $n$ (the details of which we describe in section \ref{SectionCoarse}), and we find that this leads to two convergent sequences $l_n$ and $u_n$ with the following properties: \begin{enumerate} \item $u_n$ and $l_n$ are the solutions to two families of optimisation problems, which are related to each other by a change of parameters, \item $u_n$ is non-increasing, $l_n$ is non-decreasing, and $u_n \geq l_n$, \item Inputs with radius $r < l := \lim_{n \rightarrow \infty} l_n$ can be efficiently simulated classically, \item For inputs with radius $r > u := \lim_{n \rightarrow \infty} u_n$, the particular notion of generalised separability that we use for our coarse graining approach breaks down, leading to an ill-defined sampling problem. The details of this discussion are left until later. \end{enumerate} In the case of the square 2D lattice and $CZ$ interactions we have numerically computed upper bounds to $u$ and lower bounds to $l$ and are quite certain that $0.0698 \leq l \leq u \leq 0.139$, but based on a conjecture and small scale numerical experiments we expect that in fact $0.0913 \leq l \leq u \leq 0.128$. These values should be compared to $\lambda^{-4} \approx 0.056$, which would be the size of the classically simulatable region without coarse graining. \end{enumerate} \section{Prior Work and Context} A cluster state computation \cite{Raussendorf} in its original form proceeds by placing $\ket{\psi} = \ket{+} = (\ket{0}+\ket{1})/\sqrt{2}$ states on the vertices of a graph, interacting neighbouring qubits with control-$Z$ (henceforth denoted as $CZ$) gates, and then destructively measuring (i.e. measured qubits are not reused) in the $Z$ basis or the $XY$ plane (i.e. measurements of operators of the form $\cos(\theta) X + \sin(\theta) Y$). This has the power of BQP. What happens if $\ket{\psi}$ is replaced by another pure or mixed state $\rho$?
Two facts are immediate from the original scheme \cite{Raussendorf}: if $\rho$ corresponds to an equal weight superposition of $\ket{0}$ and $\ket{1}$, then the power remains that of BQP (this is because the computational power is trivially invariant under rotations about the $Z$ axis), and if $\rho=\ket{0}\bra{0}$ or $\rho=\ket{1}\bra{1}$, then the system can be efficiently simulated classically as the $CZ$ gates act trivially and so the final state is a product state. Previous works have looked at other input states, although sometimes in slightly different settings to the one considered in this work. In \cite{Terry,mora_universal_2010} for example, all local measurements are permitted, and moreover the measurements are permitted to be nondestructive (in this work we will only consider destructive measurements). Nevertheless, in that setting it was shown that for some graphs when $\rho$ is a pure or mixed state close enough to $\ket{+}$, quantum computation is still possible by performing filtering measurements that probabilistically distill out a perfect cluster state (for some graphs it is easy to adapt the approach to destructive measurements of the original cluster state form - we discuss this briefly in section \ref{SectionObstacles}). It was also shown that when there is sufficient noise - enough to prevent large scale quantum entanglement in a given graph - the systems can be efficiently simulated classically. Such considerations are also present in \cite{Dan}, albeit for a quite different noise model. The core idea that noise can destroy computationally useful entanglement goes back to the early years of quantum computing, see e.g. \cite{ABnoisy}. In the case of sufficiently noisy input states, other methods can also be used to provide classically efficiently simulatable regimes. 
For instance, enough dephasing will turn each $CZ$ gate in the cluster state circuit into one that does not generate quantum entanglement, thereby allowing the classical algorithm of \cite{HN} to be used. Alternatively, dephasing an ideal input $\ket{+}$ will effectively (by shifting the noise through to the measurements) turn the measurements into Clifford ones so that the Gottesman-Knill theorem \cite{GK} may be invoked. For any underlying graph if $\rho$ is a dephased $\ket{+}$ state, then the Gottesman-Knill theorem classically simulates once $\| \rho - \rho_{diag}\| \lessapprox 0.7$ (see e.g. \cite{VHP05}). However, all of these previous approaches need the inputs to be mixed or noisy in order to enter a classically efficient regime. This is where our work differs most from previous literature: apart from trivial $\ket{0},\ket{1}$ inputs, or for systems with suitably restricted connectivity (such as 1D, low width, or low tree-width systems \cite{Nielsen1D, Yoran, markov, jozsa2006simulation}, or when qubit loss significantly limits the size of clusters \cite{Dan}), previous works have required non-zero quantum entropy to bring on a classically efficient regime in the kinds of systems that we consider. In contrast, in this work we develop classically efficient simulation algorithms in which $\ket{\psi}$ can be taken to be either a {\it pure} or a mixed state as long as it is close enough to diagonal in the computational basis. In order to achieve this we only allow the original cluster state measurements (i.e. {\it destructive} $Z$ basis and $XY$ plane measurements). However, these measurements are still non-trivial because when the inputs are the ideal $\ket{+}$ states they are sufficient for quantum computation. To our knowledge no previous classical algorithm efficiently simulates the pure systems that our method can efficiently simulate.
Moreover, the method we develop has a natural generalisation to a wide variety of other entangled states, and compared to most previous classical algorithms for these types of systems, our approach (at least the non-coarse-grained version) is less reliant on specific features of the underlying graph (such as percolation thresholds or tree-width) as it only cares about the degree of the graph (the coarse grained version of our argument does rely more on the graph structure). We note that while our methods are the only known efficient classical methods for the low entropy instances that we consider, for sufficiently noisy inputs with $CZ$ interactions an approach based on the Gottesman-Knill theorem is more powerful than the techniques presented in this work. The Gottesman-Knill theorem does not apply to our low entropy systems (because, by magic state distillation \cite{BK}, they contain enough `magic' to enable quantum computation given access to arbitrary stabilizer computation), and furthermore the Gottesman-Knill theorem might not be effective when the $CZ$ gate is replaced by other non-Clifford diagonal gates - as we consider when generalising Lemma 1 in section \ref{section_generalisation}. It is tempting to argue that results of the form that we develop in this article should be either expected or evident: one might argue that if $\rho$ is close to diagonal, e.g. if $\rho$ is a pure state of the form $\ket{0}+\epsilon\ket{1}$, with $\epsilon$ small, then even after the $CZ$ gates there will be little entanglement, and so the system is likely to be classically efficient. However, one has to be careful with this kind of reasoning for a few reasons. Firstly, states that are locally close to product states can still support measurement based quantum computation - see \cite{gross2007measurement} (similar statements are true for the gate model \cite{van2013universal} too).
Secondly, it is conceivable, although we do not yet have a proof of this, that if all destructive local measurements are allowed (as opposed to restricting the measurements to the standard cluster state measurements) then some of the pure systems that we demonstrate to be classically efficiently simulatable might gain the ability to perform universal quantum computation or some form of non-classical computation (in the sense that an efficient classical sampling to additive error is not possible). If this turns out to be the case, then it would rule out any classical simulation method that does not exploit the restriction on measurements, and the intuitive idea that the systems are weakly quantum entangled would be false (in this context, in section \ref{SectionObstacles} we point out that allowing all local destructive measurements indeed does bring an additional power on some lattices - that of being able to create ideal cluster states under postselection, even when the inputs are very close to $\ket{0}$ or $\ket{1}$). Finally, even if the intuition is true that the kind of quantum entanglement we have in our systems is too limited to allow non-classical computation, then one still has the challenge of working out how far from diagonal the inputs can be while remaining classically efficiently simulatable. In this respect our method is technically appealing in that it gives rigorous quantitative bounds applicable to any lattice. One might speculate that some variant of a tensor network based method (e.g. \cite{van2007classical,Yoran,markov,jozsa2006simulation}) may be used to supply an efficient classical simulation of the systems that we consider. However, the entangled state resulting from inputs $\rho$ of the form $\ket{0}+\epsilon \ket{1}$ can be transformed to the ideal cluster state (as arises when $\rho$ is given by $\ket{+}$) by applying local linear transformations $A_{\epsilon} = \ket{0}\bra{0} + (1/\epsilon)\ket{1}\bra{1}$. 
As ideal cluster states are not likely to be classically efficiently simulatable, this suggests that any method using some form of efficient tensor manipulation would need to exploit not just the tensor network structure, but some property resulting from the $A_{\epsilon}$s - perhaps the fact that acting with $A^{-1}_{\epsilon} = \ket{0}\bra{0} + \epsilon\ket{1}\bra{1}$ locally on each qubit of an ideal cluster state reduces correlations between different parts of the state. However, even if such an approach is possible (and it would certainly not be if allowing all local destructive measurements brought non-classical computational power), then we anticipate that it would probably need to exploit further structure in the underlying graph than just the degree, and would also likely be more technically challenging. This is because any classical simulation method is anticipated to fail for $\epsilon \approx 1$, and so any classical algorithm has to have a `phase transition' at which it fails. For our method we are able to rigorously identify classical regions without needing to invoke the typical kinds of technical machinery (e.g. percolation thresholds) that might be needed to identify a phase-transition. However, if a tensor network approach does turn out to work, it would have an advantage over our approach in that it would apply to all destructive (and possibly even non-destructive) local measurements, not just the cluster state measurements we consider here. Our work is part of a sequence of papers in which some of us have investigated a specific notion of generalised entanglement (a particular instance of the more general notions considered in \cite{Barnum}) to construct classical simulation algorithms and local hidden variable models \cite{RV,RVprep,AJRV1,AJRV2}.
In \cite{AJRV1} a reasonably general construction is given that, given almost any set of restricted local measurements on local particles of high enough dimension ($\geq 16$), allows one to write down a {\it pure} multiparticle entangled state that has a local hidden variable model for those measurements - something that would be impossible if all local measurements were allowed \cite{LoPop}. Moreover, some of those examples have the following property: they can be efficiently sampled classically by exploiting a type of generalised separability, but if all measurements are permitted they enable universal quantum computation, and hence no classical efficient simulation is likely unless it exploits the restricted measurements. While the examples of \cite{AJRV1} might be considered somewhat contrived as compared to the situations considered in this paper, together they demonstrate that there can be large scale, computationally significant, differences between generalised entanglement and the regular quantum version. The version of generalised entanglement that we use should be contrasted to more general versions developed in \cite{Barnum}, which may be briefly summarised as follows. In the conventional study of quantum entanglement one considers the comparison between a global system and subsystems, and global states are said to be entangled if they cannot be described as convex mixtures of pure products of the individual subsystem states. In \cite{Barnum} this perspective is modified to consider comparisons between one algebra of observables and a subalgebra, or one convex set and a subset, accompanied by an appropriate definition for states of the system.
By considering subalgebras or subsets rather than subsystems, their notion of generalised entanglement does not necessarily need a partition of the system into subsystems, in contrast to both the usual notion of quantum entanglement and the particular version of generalised entanglement that we will consider in this work. In the context of Lie algebras the viewpoint of \cite{Barnum} was adopted in \cite{Somma} to develop efficient classical algorithms for some quantum-entangled situations that generalise previously known fermionic \cite{TerhalD} systems. However, in spite of the fact that it also uses a notion of generalised entanglement to motivate a classical simulation algorithm, the Lie algebraic framework of \cite{Somma} does not apply to the situations we consider in this paper. The results we obtain have relationships to other foundational questions. Our classical simulation algorithms provide two types of hidden variable model for entangled pure states - the first is a local hidden variable model in the conventional sense, and the second (resulting from the `coarse grained' version) is a local hidden variable model where particles can communicate within certain blocks. Our approach also can be considered as an instance of a type of non-quantum theory which has certain non-classical features (such as an uncertainty principle for some measurements) but no entanglement. It is not quite a generalised probabilistic theory in the sense considered in \cite{Boxworld}, because in some situations it can lead to negative probabilities. However, it fits into the broad theme of computation in beyond-quantum theories that has been explored in recent works \cite{Lee}. \section{Cylindrical state spaces and preview of main techniques} Let us first explain what we mean by generalised entanglement. 
As the version of generalised entanglement that we need here is a special case of the broad framework developed in \cite{Barnum}, we will be more narrow and concrete than \cite{Barnum} in our description. One can consider modifying local state spaces so that they are not sets of quantum operators, but more general sets of operators that may only return valid probabilities under a restricted set of measurements of interest. Such new sets of operators can change our notion of entanglement, and this can lead to classical descriptions where they might otherwise be unexpected. Indeed, the well known fact that a Bell state such as $(\ket{00}+\ket{11})/\sqrt{2}$ has a local hidden variable model with respect to Pauli measurements can be reinterpreted as a statement that the state is separable w.r.t. cubes of operators \cite{RV} that arise in the study of discrete phase spaces \cite{GrossPhase}. We will consider the action of $CZ$ (control-$Z$) gates. When a $CZ$ gate acts on input pure product states that are not computational basis states, the output will be quantum entangled, in that it cannot be given a quantum separable decomposition of the form: \begin{equation} \sum_{i} p_i \rho^A_i \otimes \rho^B_i \end{equation} where $\rho^A_i$ and $\rho^B_i$ are local quantum states. However, we will instead allow the local operators to come from `cylindrical' state spaces, and this will change the class of states that we can consider separable. A `cylinder of radius $r$' is defined as the following set of normalised (i.e. unit trace) operators: \begin{equation} {\rm Cyl}(r) := \{ \rho |\rho=\rho^{\dag}, \mbox{tr}\{\rho\}=1, x^2 + y^2 \leq r^2, z \in [-1,1] \} \end{equation} where $x,y,z$ are the Bloch expansion coefficients of $\rho$, i.e. $\rho = (I + xX + yY + zZ)/2$, where $X,Y,Z$ are the Pauli operators and $I$ is identity. Visually this is a set of Bloch vectors drawn from cylinders of radius $r$, hence the name (see figure \ref{cylinderpic}). 
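The Bloch-coefficient condition $x^2 + y^2 \leq r^2$ is equivalent to bounding the trace distance between $\rho$ and its dephasing, i.e. the radius $r$ defined earlier. A small numerical consistency check (numpy; illustrative code in our notation, not part of the manuscript) makes this concrete:

```python
import numpy as np

# Pauli matrices and identity
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

def radius(rho):
    """r = || rho - D_Z(rho) ||_1, the trace norm of the off-diagonal part."""
    delta = rho - np.diag(np.diag(rho))
    return np.abs(np.linalg.eigvalsh(delta)).sum()

# An operator with Bloch coefficients (x, y, z); note z = 1 together with
# nonzero x, y gives a Bloch vector of length > 1 (a non-quantum cylinder
# state), but the radius only depends on x and y.
x, y, z = 0.3, 0.4, 1.0
rho = (I + x * X + y * Y + z * Z) / 2
# radius(rho) equals sqrt(x**2 + y**2) = 0.5
```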
For $r>0$ these state spaces always contain non-quantum states, as whatever the value of $r>0$, they protrude from the Bloch sphere at the poles - for the same reason they always contain some pure qubit states. The cylinder sets can also be rewritten in terms of a norm: \begin{equation} {\rm Cyl}(r) := \{ \rho | \rho=\rho^{\dag}, \mbox{tr}\{\rho\}=1, \| \rho - \mathcal{D}_Z(\rho) \| \leq r \} \end{equation} where $\mathcal{D}_Z(\rho)$ is the dephasing of $\rho$ (i.e. with all off-diagonal elements replaced by zero) and the norm is the trace norm. We may also define ${\rm Cyl}(r)$ in terms of a dephasing transformation on ${\rm Cyl}(1)$ \begin{equation} {\rm Cyl}(r) := \{ \rho| \rho = r \sigma+(1-r)\mathcal{D}_Z\left(\sigma \right), \sigma \in {\rm Cyl}(1) \} \end{equation} (this version will be the one we will use for generalising our results to qudit systems). Much of the intuition behind the paper can be understood from the following part of Lemma 1: {\bf Lemma 1 (part of):} Consider a $CZ$ gate that acts on input operators that are drawn from `cylinders' of radius $r$. The output can be given a separable decomposition if the operators in the separable decomposition are drawn from cylinders of radius $\lambda r$, where $\lambda = \sqrt{{1 \over \sqrt{5} - 2}} \approx 2.06$. We call the constant $\lambda$ (and generalisations that we later describe) a {\it disentangling growth rate}, sometimes prefixing the word `cylinder' or `cylindrical' in order to explicitly emphasise the type of entanglement we are considering. As we shall now describe, we may combine Lemma 1 with classical algorithms that have been developed for systems with limited quantum entanglement, to obtain classical simulation algorithms for the entangled quantum systems that we consider in this work. 
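For concreteness, the numerical values quoted in the introduction follow directly from this constant:

```latex
\lambda = \sqrt{\frac{1}{\sqrt{5}-2}} \approx 2.058, \qquad
\lambda^{-2} = \sqrt{5}-2 \approx 0.236, \qquad
\lambda^{-4} = \left(\sqrt{5}-2\right)^{2} = 9 - 4\sqrt{5} \approx 0.0557,
```

so for the square lattice ($D=4$) the criterion $r \leq \lambda^{-D}$ reproduces the value $\approx 0.056$ quoted in the abstract.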
If a state of multiple particles (say $A, B, C, ...$) is well approximated by a quantum separable decomposition (perhaps with some grouping into blocks of particles), \begin{equation} \label{qsep} \rho_{ABC...} \approx \sum_{i} p_i \rho^A_i \otimes \rho^B_i \otimes \rho^C_i \otimes ... \end{equation} then one could attempt to sample the outcomes of local measurements on the state by first sampling the classical distribution $p_i$ and then sampling the outcomes of local measurements on the $i$th product state (which can be efficiently described by a linear number of small matrices). Although this approach seems plausible, it may fail to be efficient if the classical distribution $p_i$ cannot be efficiently sampled, or if the errors in any discrete approximations accumulate uncontrollably. In spite of this, the underlying intuition has been shown to work well in a variety of situations where the entanglement present in the system is very limited \cite{ABnoisy,Jozsa,HN}. Of course, entangled quantum states cannot usually be well approximated by a separable decomposition of the form of (\ref{qsep}) in the first place, and so for such systems one would not expect such an approach to classical simulation to work. However, for the variants on cluster state quantum computation considered in this work, we will see that one can write down a suitable cylinder-separable decomposition upon which an efficient classical simulation can be constructed. We give an informal overview in this section; technical details and generalisations to qudits are explained in later sections. Suppose that the inputs $\ket{\psi}$ to a finite depth circuit of $CZ$s are pure qubit quantum states drawn from within a cylinder of radius $r$.
Each time we apply a $CZ$ gate in our circuit, on the basis of Lemma 1 we may update the state of the particles so that if the inputs at that point have radius $r_{in}$, they are replaced by a product of two new cylinder states of radius $r_{out} = \lambda r_{in} \approx 2.06 r_{in}$, which are sampled from the cylinder separable decomposition. Each time a qubit undergoes a $CZ$ interaction we update its state, and each time the radius grows by a factor of $\lambda$. In this way we always have a product decomposition of the system, as opposed to one involving exponentially large matrices. However, this comes at a cost: the radius of the product operators in the decomposition will be \begin{equation} \lambda^D r \approx (2.06)^D r \end{equation} where $D$ is the degree of the cluster lattice (i.e. the maximum number of edges touching a vertex), and if $D$ is large these operators will be far from physical states. However, it turns out that we may still use these non-physical states to efficiently sample the allowed measurements (i.e. $Z$ and $XY$ plane measurements) using the algorithm of \cite{HN} provided that the radius does not grow beyond 1 for any particle in the system, because ${\rm Cyl}(1)$ corresponds to what is referred to as the (normalised) {\it dual} of the allowed measurements. The `dual' of a set $\Omega$ of operators is the set \begin{equation} \label{normdual} \Omega^*:=\{\sigma| \mbox{tr}\{\sigma^{\dag} \omega \}\geq 0, \forall \omega \in \Omega \}, \end{equation} so in physical terms the dual of a set of measurements is the set of operators that return non-negative probabilities for those measurements under the Born rule - in our case we will exclusively consider normalised duals, by which we mean that we additionally require the operators to have unit trace.
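The duality claim is easy to check numerically for a qubit: an extremal operator of ${\rm Cyl}(1)$ is not a quantum state, yet it assigns valid probabilities to every allowed measurement. A short sketch (assuming numpy):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def cyl_extremum(theta, s):
    """Extremal operator of Cyl(1): Bloch vector (cos theta, sin theta, s), s = +-1."""
    return 0.5 * (I2 + np.cos(theta) * X + np.sin(theta) * Y + s * Z)

def equatorial(phi):
    """Rank-1 projector in the XY plane (an allowed measurement operator)."""
    return 0.5 * (I2 + np.cos(phi) * X + np.sin(phi) * Y)

rho = cyl_extremum(0.7, +1)
# Not a quantum state: its Bloch vector has length sqrt(2) > 1 ...
assert np.linalg.eigvalsh(rho).min() < 0
# ... yet every allowed (Z or XY-plane) measurement gets a probability >= 0:
for phi in np.linspace(0, 2 * np.pi, 50):
    assert np.trace(equatorial(phi) @ rho).real >= -1e-12
for P in (np.diag([1.0, 0.0]), np.diag([0.0, 1.0])):
    assert np.trace(P @ rho).real >= -1e-12
```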
All of the foregoing discussion means that provided: \begin{equation} \lambda^D r \leq 1 \,\,\,\,\, \Rightarrow \,\,\,\,\, r \leq {1 \over \lambda^D} \approx {1 \over (2.06)^D} \end{equation} we can efficiently simulate classically. The method, and many of the intuitions underlying it, generalise to a variety of other systems, as we discuss in section \ref{section_generalisation}, and are amenable to a form of coarse graining as we discuss in section \ref{SectionCoarse}. This can significantly increase the size of the classically simulatable region for lattices with sufficient structure. \section{Structure of the paper} This paper is structured as follows. In section \ref{Mainlemma} we prove Lemma 1. In section \ref{Classsim} we explain the classical simulation algorithm that exploits generalised separability. In section \ref{section_generalisation} we provide the generalisation of Lemma 1 to various other systems, including ones in which the $CZ$ interactions are replaced by any other diagonal multi-qubit gates, and analogous systems using qudits (`privileged basis systems'). In section \ref{SectionCoarse} we explore a coarse grained version of the simulation method which increases the range of classically simulatable inputs, as well as bringing connections to the foundations of physics. In section \ref{SectionObstacles} we discuss obstacles facing attempts to classically simulate efficiently an increased range of qubit input states. We conclude in section \ref{SectionDiscussion} with a summary and discussion on the extent to which the results may be generalised further. The appendices contain some computations that we defer from the main text. Readers who only want the short version can get much of the intuition from figures 1 to 3. \begin{figure} \caption{This diagram illustrates a cylinder with $r=1$ in Bloch space. The sphere represents the Bloch Sphere. Our cylinders always extend the full height from $z=-1$ to $z=+1$, irrespective of radius.
The unit cylinder with $r=1$ is the normalised dual of the permitted measurements (i.e. the set of normalised operators that yield non-negative probabilities for the allowed measurements).} \label{cylinderpic} \end{figure} \section{Maintaining a separable decomposition by increasing the radius} \label{Mainlemma} In order to develop the classically efficient simulation, we need to achieve two things. Firstly, we need to show how a generalised separable decomposition can be obtained, and then we need to argue that it can be efficiently simulated. The latter point follows almost immediately from the classical simulation method described in \cite{HN}, albeit with some efficiencies possible due to the fact that our circuits have a simpler structure. We defer discussion of this to a later section. In this section we concentrate on the first task, by establishing the main technical tool that we will need to obtain a separable decomposition in terms of cylinder state spaces. The key observation is that if a $CZ$ gate acts on two cylindrical state spaces, then the output is separable w.r.t. two new cylindrical state spaces with larger radii. This is expressed by the following lemma: {\bf Lemma 1: (Cylinder disentangling growth rates)} Consider the set $CZ( {\rm Cyl}(r_A) \otimes {\rm Cyl}(r_B))$ of two qubit operators made by acting with a $CZ$ gate on ${\rm Cyl}(r_A) \otimes {\rm Cyl}(r_B)$. Any operator in $CZ( {\rm Cyl}(r_A) \otimes {\rm Cyl}(r_B))$ can be written in the generalised separable form: \begin{equation} \sum_i p_i \rho^A_i \otimes \rho^B_i \label{csep} \end{equation} where $\rho^A_i \in {\rm Cyl}(R_A)$ and $\rho^B_i \in {\rm Cyl}(R_B)$ if and only if: \begin{equation} \label{lem1} 1 \geq \left({r_A \over R_A}+{r_B \over R_B}\right)^2 + \left({r_A \over R_A}\right)^2\left({r_B \over R_B}\right)^2 \end{equation} We refer to an operator of the form of equation (\ref{csep}) as being ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable.
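Condition (\ref{lem1}) is cheap to evaluate. The following sketch (assuming numpy; the helper name `lemma1_ok` is ours) checks the symmetric growth rate $\lambda$ numerically, together with one asymmetric trade-off between the two output radii:

```python
import numpy as np

def lemma1_ok(rA, rB, RA, RB):
    """Condition (lem1): CZ(Cyl(rA) (x) Cyl(rB)) is Cyl(RA),Cyl(RB)-separable."""
    fA, fB = rA / RA, rB / RB
    return 1.0 >= (fA + fB) ** 2 + (fA * fB) ** 2

lam = np.sqrt(1.0 / (np.sqrt(5.0) - 2.0))      # ~ 2.05817
r = 0.37                                       # an arbitrary input radius

# Symmetric growth by lambda sits exactly on the boundary of (lem1)
# (a tiny epsilon guards against floating-point round-off):
assert lemma1_ok(r, r, lam * r * (1 + 1e-9), lam * r * (1 + 1e-9))
# Growing by slightly less than lambda fails:
assert not lemma1_ok(r, r, 0.99 * lam * r, 0.99 * lam * r)
# Asymmetric trade-off: one output radius can grow by less than lambda
# if the other grows by more:
assert lemma1_ok(r, r, 1.5 * lam * r, 1.7 * r)
```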
Note that as the cylinders are the convex hulls of their extremal points, we may pick the $\rho^A_i$ and $\rho^B_i$ appearing in the decomposition (\ref{csep}) to have radii exactly equal to $R_A$ and $R_B$ respectively. \noindent Before we prove Lemma 1 let us discuss its interpretation. Let us define the ratios \begin{eqnarray} g_i := {R_i \over r_i} \end{eqnarray} and refer to them as `disentangling growth rates'. Roughly speaking, the lemma states that the $CZ$ can be interpreted as a gate giving separable output, provided that the radii of the output spaces are sufficiently large relative to those of the input spaces, i.e. provided that the disentangling growth rates are large enough. It may be helpful to note that the constraint (\ref{lem1}) is very closely approximated by the constraint one gets by disregarding the low magnitude fourth order terms: \begin{equation} 1 \gtrsim {r_A \over R_A}+{r_B \over R_B} \end{equation} In terms of growth factors this gives: \begin{equation} 1 \gtrsim {1 \over g_A}+{1 \over g_B} \end{equation} So if $g_A$ is small, then $g_B$ must be large, and vice versa. We will mostly consider the symmetric case where $g_A = g_B = g$. In this case equation (\ref{lem1}) becomes (exactly) \begin{eqnarray} 1 - {4 \over g^2} - {1 \over g^4} \geq 0 \nonumber \end{eqnarray} which can be solved to give (using the fact that $g \geq 0$ by definition anyway): \begin{equation} \label{lambda} g \geq \lambda := \sqrt{{1 \over \sqrt{5} - 2}} \approx 2.05817 \end{equation} So we see that as long as the radii of the output spaces are roughly twice the input radii, then the $CZ$ can be considered a separable operation (see figure \ref{czgrowth}). \begin{figure} \caption{When we apply a $CZ$ (control-$Z$) operation to two input cylinders, the output can be given a separable decomposition with respect to `cylindrical' state spaces provided that the cylinder radius grows by a factor of $\lambda \approx 2.06$.
In particular, the $CZ$ can be described as a probabilistic transformation mapping products of input cylinder operators with radius $r$ to products of output cylinder operators of radius $\lambda r$. We call this the `stochastic' representation. This means that if each qubit undergoes a finite number of $CZ$ gates, and if we start with inputs (which can be pure) drawn from a narrow enough radius, the overall output state can be represented as cylinder separable states with $r \leq 1$. This enables a classically efficient sampling algorithm because such operators return non-negative probabilities on the permitted measurements (in the $Z$ direction and $XY$ planes). The approach is amenable to a form of coarse graining and applies to all finite degree lattices consisting of (i) diagonal gates in the computational basis, (ii) local destructive measurements in the computational basis or in bases unbiased to it.} \label{czgrowth} \end{figure} \subsection{Proof of Lemma 1} Consider a two particle operator $\rho_{AB}$. We may expand it in the Pauli basis as \begin{equation*} \rho_{AB} = {1 \over 4}\sum_{i,j} \rho_{i,j} \sigma_i \otimes \sigma_j \end{equation*} where $\sigma_0 = I, \sigma_1 = X, \sigma_2=Y, \sigma_3= Z$ are the four Pauli matrices. Whenever expansion coefficients refer to a Pauli operator expansion we will use square brackets ``$[$'',``$]$'', reserving curved brackets ``$($'',``$)$'' for expansion coefficients in the computational basis or for basis independent descriptions. So for instance we will display the coefficients $\rho_{i,j}$ as a $4 \times 4$ matrix in square brackets, with rows and columns numbered from $0,\ldots,3$: \begin{eqnarray*}\left[\begin{array}{cccc} \rho_{00}=1 & \rho_{01} & \rho_{02} & \rho_{03} \\ \rho_{10} & \rho_{11} & \rho_{12} & \rho_{13} \\ \rho_{20} & \rho_{21} & \rho_{22} & \rho_{23} \\ \rho_{30} & \rho_{31} & \rho_{32} & \rho_{33} \end{array}\right] \end{eqnarray*} where we have assigned $\rho_{00}=1$ as we will consider normalised operators.
When we are considering products of local normalised operators, we will use the notation (again with square brackets): \begin{equation*} [1, x_A, y_A, z_A] \otimes [1, x_B, y_B, z_B] \end{equation*} to denote the product operator \begin{equation*} {1 \over 2} (\sigma_0 + x_A \sigma_1 + y_A \sigma_2 + z_A \sigma_3) \otimes {1 \over 2} (\sigma_0 + x_B \sigma_1 + y_B \sigma_2 + z_B \sigma_3) \end{equation*} Let us consider a two particle product state $\rho_A \otimes \rho_B$ where $\rho_A, \rho_B$ are drawn from two cylinders with radii $r_A$ and $r_B$ respectively. Our goal is to determine whether the output of a $CZ$ gate, acting on all such possible inputs, leads to a ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable state. With this goal in mind we only need to consider extremal points, because if the output from all extremal inputs is ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable, then the output will be separable for all inputs because the $CZ$ is linear. Furthermore, we may exploit the symmetry about the $Z$ axis, as follows. Suppose that we can provide an explicit ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable decomposition: \begin{equation*} CZ(\rho_A \otimes \rho_B) = \sum_{i} p_i \omega^i_A \otimes \omega^i_B \end{equation*} where $\omega^i_y \in {\rm Cyl}(R_y)$. Then because $CZ$ commutes with local $Z$ rotations $U_z$, and because the cylinders are invariant under $Z$ rotations, we automatically have the ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable decomposition \begin{equation*} CZ(U^A_z(\rho_A) \otimes U^B_z(\rho_B)) = \sum_{i} p_i U^A_z(\omega^i_A) \otimes U^B_z(\omega^i_B) \end{equation*} By exploiting this $Z$ rotation equivalence, w.l.o.g. we may restrict our attention to input products with expansions of the form $[1, r_{A/B},0,\pm 1]$. We may now make one further simplification.
It is easy to verify that if the first input particle has $z=1$, and the output is separable: \begin{equation*} CZ( [1, r_A,0,1] \otimes [1, r_B,0,\pm 1] ) = \sum_{i} p_i \omega^i_A \otimes \omega^i_B \end{equation*} then modifying the first input to have $z=-1$ gives another operator with a separable decomposition: \begin{equation*} CZ( [1, r_A,0,-1] \otimes [1, r_B,0,\pm 1] ) = \sum_{i} p_i X \omega^i_A X^{\dag} \otimes Z \omega^i_B Z^{\dag} \end{equation*} In this argument we could equally well have considered the second input instead, as the $CZ$ is symmetric. This means that we need only consider one input extremum: \begin{equation*} [1, r_A,0,1] \otimes [1, r_B,0,1] \end{equation*} and determine whether the output is ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable. Under the action of the $CZ$ gate this input transforms to: \begin{eqnarray} \label{output} \left[\begin{array}{cccc} 1 & r_B & 0 & 1 \\ r_A & 0 & 0 & r_A \\ 0 & 0 & r_A r_B & 0 \\ 1 & r_B & 0 & 1 \end{array} \right] \end{eqnarray} If this corresponds to a ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable operator, then it can be written as the outer product: \begin{eqnarray*} \sum_i p_i \left[\begin{array}{c} 1 \\ R_A \cos(\theta_i) \\ R_A \sin(\theta_i) \\ 1 \end{array} \right] \left[\begin{array}{cccc} 1 & R_B \cos(\phi_i) & R_B\sin(\phi_i) & 1 \end{array} \right] \end{eqnarray*} where the angles $\theta_i$ and $\phi_i$ indicate where on the top perimeter of the cylinder the local states lie.
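The Pauli-coefficient matrix (\ref{output}) can be verified directly; a quick numerical sketch (assuming numpy):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]
CZ = np.diag([1.0, 1.0, 1.0, -1.0]).astype(complex)

def product_op(cA, cB):
    """Operator [1,x,y,z]_A (x) [1,x,y,z]_B from Pauli expansion coefficients."""
    opA = 0.5 * sum(c * s for c, s in zip(cA, paulis))
    opB = 0.5 * sum(c * s for c, s in zip(cB, paulis))
    return np.kron(opA, opB)

def pauli_coeffs(rho):
    """Coefficient matrix rho_{ij} = tr[(sigma_i (x) sigma_j) rho]."""
    return np.array([[np.trace(np.kron(si, sj) @ rho).real for sj in paulis]
                     for si in paulis])

rA, rB = 0.3, 0.4
rho_out = CZ @ product_op([1, rA, 0, 1], [1, rB, 0, 1]) @ CZ   # CZ is self-inverse
expected = np.array([[1, rB, 0, 1],
                     [rA, 0, 0, rA],
                     [0, 0, rA * rB, 0],
                     [1, rB, 0, 1]])
assert np.allclose(pauli_coeffs(rho_out), expected)
```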
If we left multiply the previous two expressions by \begin{eqnarray*} \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1/R_A & 0 & 0 \\ 0 & 0 & 1/R_A & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] \end{eqnarray*} and right multiply by \begin{eqnarray*} \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1/R_B & 0 & 0 \\ 0 & 0 & 1/R_B & 0 \\ 0 & 0 & 0 & 1 \end{array} \right] \end{eqnarray*} we see that equation (\ref{output}) is ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable iff \begin{eqnarray} \label{output2} \left[\begin{array}{cccc} 1 & {r_B \over R_B} & 0 & 1\\ {r_A\over R_A} & 0 & 0 & {r_A\over R_A}\\ 0 & 0 & {r_Ar_B \over R_A R_B}& 0 \\ 1& {r_B\over R_B} & 0 & 1 \end{array}\right] \end{eqnarray} is ${\rm Cyl}(1),{\rm Cyl}(1)$-separable. We will now see that determining this is equivalent to checking the usual quantum separability of a two qubit quantum operator. This can be seen as follows. Observe that if \begin{eqnarray} \label{output3} \left[\begin{array}{cccc} 1 & \pm {r_B \over R_B} & 0 & 0\\ \pm {r_A\over R_A} & 0 & 0 & 0 \\ 0 & 0 & {r_Ar_B \over R_A R_B}& 0 \\ 0& 0 & 0 & 0 \end{array}\right] \end{eqnarray} has a quantum separable decomposition, \begin{equation*} \sum_i p_i [1, x^i_A,y^i_A,z^i_A] \otimes [1, x^i_B,y^i_B,z^i_B] \end{equation*} then (\ref{output2}) is ${\rm Cyl}(1),{\rm Cyl}(1)$-separable because it has decomposition \begin{equation*} \sum_i p_i [1, x^i_A,y^i_A,1] \otimes [1, x^i_B,y^i_B,1] \end{equation*} Moreover, by taking any ${\rm Cyl}(1),{\rm Cyl}(1)$-separable decomposition for equation (\ref{output2}) and setting $z^i_A=0$ and $z^i_B=0$ for all $i$, we recover a quantum-separable decomposition for (\ref{output3}). This means that (\ref{output}) is ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable if and only if (\ref{output3}) corresponds to a positive and PPT operator, so we may apply the PPT criterion \cite{PPT}.
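The reduction to the PPT criterion can be spot-checked numerically: positivity of (\ref{output3}) together with its partial transpose agrees with condition (\ref{lem1}) over random values of the ratios. A sketch (assuming numpy):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def positive(fA, fB):
    """Positivity of (output3) and of its partial transpose (sign flip on Y(x)Y)."""
    base = np.kron(I2, I2) + fA * np.kron(X, I2) + fB * np.kron(I2, X)
    return all(np.linalg.eigvalsh(base + s * fA * fB * np.kron(Y, Y)).min() >= -1e-9
               for s in (+1, -1))

rng = np.random.default_rng(0)
for _ in range(500):
    fA, fB = rng.uniform(0, 1, size=2)
    lem1 = 1.0 - (fA + fB) ** 2 - (fA * fB) ** 2   # condition (lem1)
    if abs(lem1) > 1e-6:                            # skip the measure-zero boundary
        assert positive(fA, fB) == (lem1 > 0)
```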
Written out explicitly, verifying this corresponds to checking that the minimal eigenvalues of the operator given by equation (\ref{output3}) \begin{equation*} I + ({r_A\over R_A}X \otimes I + I \otimes {r_B\over R_B}X) + {r_Ar_B \over R_A R_B} Y \otimes Y \end{equation*} and its partial transpose \begin{equation*} I + ({r_A\over R_A}X \otimes I + I \otimes {r_B\over R_B}X) - {r_Ar_B \over R_A R_B} Y \otimes Y \end{equation*} are non-negative. The eigenvalues of these operators can be found quite straightforwardly. We first note that these two operators can be interconverted by applying an $X$ transformation on the first qubit (as it changes $Y \otimes Y$ to $-Y \otimes Y$, but leaves the other terms alone). Hence the eigenvalues of the two operators are equal and so we only need to work out the eigenvalues of one of them, say the second. For convenience we apply a Hadamard unitary to both qubits to give \begin{eqnarray} I + ({r_A\over R_A}Z\otimes I + I \otimes {r_B\over R_B}Z) - {r_Ar_B \over R_A R_B} Y \otimes Y \end{eqnarray} Explicitly, in the computational basis, this is the matrix \begin{eqnarray*} \left(\begin{array}{cccc} 1+f_A+f_B & 0 & 0 & f_Af_B \\ 0 & 1 + f_A- f_B & -f_Af_B & 0\\ 0 & -f_Af_B & 1-f_A+f_B & 0 \\ f_Af_B & 0 & 0 & 1-f_A-f_B \end{array}\right), \label{xy} \end{eqnarray*} where we have defined $f_A:=r_A/R_A$ and $f_B:=r_B/R_B$. The eigenvalues can be worked out on the inner and outer block separately. Both blocks have positive trace. The determinants of the inner and outer block are: \begin{eqnarray} 1 - (f_A-f_B)^2 - f_A^2f_B^2 \nonumber \\ 1 - (f_A+f_B)^2 - f_A^2f_B^2 \nonumber \end{eqnarray} respectively. As $f_A,f_B \geq 0$, the lowest of these is the outer determinant, so the output will be ${\rm Cyl}(R_A),{\rm Cyl}(R_B)$-separable if and only if the outer determinant is non-negative, which is precisely condition (\ref{lem1}). This completes the proof of Lemma 1. We remark that many of the ingredients of the proof can be applied to other control-phase gates on qubits.
$\blacksquare$ It is useful to note that the separable action of the $CZ$ gate on cylinder states can be represented in quite an efficient way as a radius increase accompanied by a $z$-dependent probabilistic application of unitaries. We refer to this representation as the `{\it stochastic representation}', and describe it in an appendix as it is not essential for the majority of our discussion. \section{Classical simulation algorithm based upon uniform disentangling growth rates} \label{Classsim} Consider a quantum device consisting of $n$ quantum particles, initialised in a product state, undergoing a polynomial number of noisy quantum gates that do not generate any quantum entanglement, followed by local measurements of each particle. In \cite{HN} an efficient classical simulation method was proposed for such systems. Its output is a sample from a probability distribution that is within total variation distance $\epsilon$ of the probability distribution of the measurement outcomes, and it produces this output in time $O(poly(n)/\epsilon)$. We will combine the algorithm of \cite{HN} with the notion of cylinder separability to obtain a classically efficient simulation algorithm for the variants of cluster state computation that we consider in this paper. We remark that the term `{\it efficient}' in `{\it efficient classical simulation}' is used in a wide variety of ways in the literature (see \cite{Hakop} for a detailed discussion). Our classical algorithm inherits its performance from \cite{HN} and falls under the category of an `$\epsilon$-simulation' as defined by \cite{Hakop}. Let us first summarise the algorithm of \cite{HN} to make the presentation more self-contained. We will explain it in the context of qubits/cylinders, as the version we will need for qudit systems proceeds analogously (apart from one technical consideration which we defer to the paragraph following equation (\ref{generalcyl})).
In \cite{HN} each input qubit is represented by its Bloch vector, stored to $l$ bits of precision. We will discuss how $l$ is chosen shortly; for now we just treat it as a parameter. Suppose w.l.o.g. that the first gate in the circuit is $\mathcal{E}$, acting upon qubits $A$ and $B$. In the actual quantum circuit this corresponds to a transformation of the form: \begin{equation} \label{gatestep} \mathcal{E}(\rho_A \otimes \rho_B) = \sum_i p_i \rho^i_A \otimes \rho^i_B \end{equation} due to the fact that $\mathcal{E}$ preserves separability. The algorithm represents this through a {\it gate simulation step} which takes as inputs $l$ bit approximations of $\rho_A$ and $\rho_B$ and constructs (through a brute force search over candidate decompositions, possible by Carath\'{e}odory's theorem) an approximation to the decomposition on the right hand side of equation (\ref{gatestep}), in which the $p_i$ and the Bloch vectors of $\rho^i_A, \rho^i_B$ are represented to $l$ bit precision. The algorithm then samples $i$ from the approximate $p_i$s, and then updates the Bloch vectors of qubits $A,B$ with the $l$ bit approximations to the Bloch vectors of $\rho^i_A, \rho^i_B$ for the value of $i$ returned from that sampling. Hence the state of the system after the first gate, as represented by the algorithm, remains a product of approximate Bloch vectors for each qubit. The algorithm then repeats this gate simulation step for each subsequent gate, so that the state of each qubit is always stored as an $l$ bit approximation of a Bloch vector. At the end of the algorithm the measurement outcomes are sampled from the final product state using the Born rule. The analysis of \cite{HN} shows how to pick a value of $l$ of order $O(\log (poly(n)/\epsilon))$ (where $n$ is the number of gates in the circuit) such that their classical algorithm samples the quantum distribution to within $\epsilon$ while remaining polynomial time.
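The search for a decomposition in the gate simulation step can be illustrated with a small linear program: we decompose the Pauli-coefficient matrix of $CZ([1,r,0,1] \otimes [1,r,0,1])$, equation (\ref{output}), as a convex combination of products of discretised extrema of the output cylinders. This is an illustrative sketch in Python (assuming numpy and scipy, with the output radius inflated by roughly $10\%$ so that the discretised cylinder still contains ${\rm Cyl}(\lambda r)$); it is not the actual implementation of \cite{HN}:

```python
import numpy as np
from scipy.optimize import linprog

lam = np.sqrt(1.0 / (np.sqrt(5.0) - 2.0))
r = 0.2                          # input cylinder radius
R = 1.1 * lam * r                # output radius, inflated to leave room for
                                 # the angular discretisation below

# Pauli-coefficient matrix of CZ([1,r,0,1] (x) [1,r,0,1]), equation (output):
target = np.array([[1, r, 0, 1],
                   [r, 0, 0, r],
                   [0, 0, r * r, 0],
                   [1, r, 0, 1]], dtype=float).ravel()

# Discretised extrema of Cyl(R): Bloch vectors [1, R cos t, R sin t, s], s = +-1.
angles = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
extrema = [np.array([1.0, R * np.cos(t), R * np.sin(t), s])
           for t in angles for s in (1.0, -1.0)]

# Each candidate product operator contributes one column of Pauli coefficients.
cols = np.array([np.outer(a, b).ravel() for a in extrema for b in extrema]).T

# Feasibility LP: find p >= 0 with cols @ p = target.  The weights sum to 1
# automatically, since the first coefficient of every column and of the target is 1.
res = linprog(c=np.zeros(cols.shape[1]), A_eq=cols, b_eq=target,
              bounds=(0, None), method="highs")
assert res.success
assert np.allclose(cols @ res.x, target, atol=1e-7)
assert abs(res.x.sum() - 1.0) < 1e-7
```

Since a $32$-gon of radius $R$ contains the circle of radius $R\cos(\pi/32)$, and $1.1\cos(\pi/32) > 1$, Lemma 1 guarantees this discretised problem is feasible.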
In terms of applying the algorithm to our setting, we note that it works for any notion of separability provided that the state space is within the normalised dual of the permitted measurements, and provided that the extremal points of the state space are given to us with $l \sim O(\log (poly(n)/\epsilon))$ bits of precision. In some cases (as with the quantum state space or the qubit cylinder state spaces) the extremal points are explicit and this is straightforward. In later sections we will consider state spaces that are provided as the solutions to inequalities, in which case the extreme points must be constructed. However, under mild conditions there are well-known ways to do this that we later discuss (see the discussion after equation (\ref{generalcyl})). Let us now describe explicitly the classical simulation based upon cylinders. Consider placing qubits at the nodes of a particular graph. Let us suppose that each qubit $i$ is initialised from within ${\rm Cyl}(r_i)$, where we will call $r_i$ the local `radius'. Now we consider applying a $CZ$ gate to two of the qubits, say qubits $1$ and $2$. The output remains cylinder separable, provided that we grow the cylinders in a way that respects equation (\ref{lem1}). Let us assume that we do this in a symmetric manner, i.e. we replace ${\rm Cyl}(r_1) \rightarrow {\rm Cyl}(\lambda r_1)$ and ${\rm Cyl}(r_2) \rightarrow {\rm Cyl}(\lambda r_2)$, using $\lambda$ as defined in equation (\ref{lambda}). We may apply the gate simulation step of \cite{HN}, except pure qubit states in the separable decomposition are now replaced by `cylinder states' from the surface of the output cylinder, e.g. of the form $[1, \lambda r_1 \cos(\theta), \lambda r_1 \sin(\theta), \pm 1]$.
Continuing in this way we see that after all the $CZ$ gates have been applied, the output will remain cylinder separable provided that the cylinder spaces ${\rm Cyl}(r_i)$ are replaced with \begin{eqnarray} {\rm Cyl}(\lambda^{D_i} r_i) \end{eqnarray} where $D_i$ is the degree of node $i$ in the graph. Now, the cylinders are certainly not quantum state spaces. However, the cylinder space of unit radius, i.e. ${\rm Cyl}(1)$, is the normalised dual of the measurements that are permitted. This means that provided the measurements are restricted to $Z$ measurements and measurements of the form $\cos(\theta)X + \sin(\theta) Y$, and provided that $\lambda^{D_i} r_i \leq 1$ for all $i$, then we can use the cylinder separable description as a way to sample the measurements efficiently, as all the required ingredients of the \cite{HN} algorithm are met, only with cylinder separability rather than quantum separability. This means that provided that the initial qubits satisfy \begin{eqnarray} \| \rho_i - (\rho_i)_{diag} \| \leq {1 \over \lambda^{D_i}} \end{eqnarray} then the system can be efficiently simulated classically. In particular, if the maximum degree of any node is $D$, and if all qubits are initialised in the same state $\rho$, then we have the following theorem: \noindent {\bf Theorem 2:} If a quantum computation involves initialising $n$ qubits in state $\rho$ on the sites of a lattice, and interacting qubits joined by an edge with $CZ$ gates, then if the states $\rho$ satisfy: \begin{eqnarray} \| \rho - \rho_{diag} \| \leq {1 \over \lambda^D} \,\,\,\,\,\, \lambda := \sqrt{{1 \over \sqrt{5} - 2}} \approx 2.05817 \nonumber \end{eqnarray} where $D$ is the maximum degree of any node, then measurements in the $Z$ basis and the $X-Y$ plane can be sampled classically to within additive error $\epsilon$ in $O(poly(n,{1 \over \epsilon}))$ time.
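The Theorem 2 condition is simple to evaluate in practice. A sketch (assuming numpy; the helpers `simulatable` and `mixed` are our own illustrative names):

```python
import numpy as np

lam = np.sqrt(1.0 / (np.sqrt(5.0) - 2.0))

def simulatable(rho, D, tol=1e-12):
    """Theorem 2 condition: ||rho - rho_diag||_1 <= 1/lam**D."""
    off = rho - np.diag(np.diag(rho))
    return np.sum(np.linalg.svd(off, compute_uv=False)) <= 1.0 / lam**D + tol

def mixed(eps):
    """A weakly coherent qubit: (1-eps)|0><0| + eps|+><+|."""
    return (1 - eps) * np.diag([1.0, 0.0]) + eps * np.array([[0.5, 0.5],
                                                             [0.5, 0.5]])

# On the 2D square lattice (D = 4) the threshold is 1/lam**4, about 0.056,
# and ||mixed(eps) - diag||_1 = eps, so eps = 0.05 qualifies but eps = 0.1 does not:
assert simulatable(mixed(0.05), D=4)
assert not simulatable(mixed(0.1), D=4)
```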
\noindent It is clear that this bound could be improved by using asymmetric growth factors if there is suitable further structure in the interaction graph. For instance, if a qubit has a larger degree than the qubits it is joined to, then with each $CZ$ one could apply a larger growth factor to the lower degree qubits and a lower growth factor to the higher degree qubits. Another way to exploit further structure in the graph is to use the idea of coarse graining from physics - we will explore this in a later section in the context of the 2D square lattice. We also remark that it would be possible to make significant efficiency savings by exploiting the fact that the two particle gates are always $CZ$s, so we could precompute the stochastic representation (described in Appendix A) of the $CZ$ gate to required accuracy once and then apply it repeatedly (as opposed to finding a decomposition for a two particle state after each gate has been applied). However, if one wishes to consider gates that may vary (a scenario we generalise to in the next section) then this would not be possible. The algorithm that we have proposed does not just simulate a type of quantum device, it also simulates some hypothetical devices that we will refer to as {\it cylindrical computers}, which act upon {\it cylindrical bits}: we define a cylindrical computer to be a device that places operators prepared in the extremal points of ${\rm Cyl}(r)$, for some $r>0$, at the vertices of a given lattice, then interacts them with $CZ$ gates, and measures in the $Z$ basis and $XY$ plane measurements. We discuss some properties of such `cylindrical computers' in section \ref{SectionCoarse}. \section{Generalisation of Lemma 1 to multi-particle gates diagonal in a computational basis} \label{section_generalisation} This section can be skipped by readers only interested in the coarse graining discussion of the next section, which can be mostly understood from the earlier discussion of qubits. 
The main ingredient of our discussion has been the observation that the output state from each $CZ$ gate is separable if we consider a cylindrical state space whose radius increases when the gate acts. In this section we show that this observation generalises to other systems that obey four conditions $(\alpha)-(\delta)$ described below. We will then show that these four assumptions are satisfied by systems that we call {\bf privileged basis systems} (PBS), defined as follows: \noindent {\bf Definition: Privileged Basis System} (PBS): A privileged basis system is a quantum circuit with the following properties. It consists of unitary gates (which may act on more than one particle) that are diagonal in a computational basis. Each particle undergoes only a finite number of gates. After the gates have been applied, destructive measurements are performed consisting of POVM elements of one of the following forms: \begin{enumerate} \item {\bf $Z$ measurement operators:} POVM elements proportional to rank-1 projectors in the computational basis $\{\ket{j}\}$ \item {\bf Equatorial operators:} POVM elements proportional to rank-1 projectors in any basis `unbiased' to the computational basis (i.e. projectors $\{P\}$ such that $\bra{j}P\ket{j} = 1/d$, where $d$ is the qudit dimension) \end{enumerate} If a measurement is in the computational basis then we will call it a {\bf $Z$ measurement}; if a measurement consists entirely of equatorial POVM elements then we will call it an {\bf `equatorial measurement'}. For such PBS systems there is an analogue of the `cylinder' of inputs that can be efficiently simulated classically, and on the basis of this one can write down many pure entangled systems that can be efficiently simulated classically. The interest in these sorts of systems stems from the variety of MBQC schemes that fall into this class.
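As a small check of the `equatorial operator' condition, the qudit Fourier basis is one standard example of a basis unbiased to the computational basis (a sketch assuming numpy):

```python
import numpy as np

def fourier_projectors(d):
    """Rank-1 projectors onto the qudit Fourier basis, a basis unbiased
    to the computational basis."""
    F = np.array([[np.exp(2j * np.pi * j * k / d) for k in range(d)]
                  for j in range(d)]) / np.sqrt(d)
    return [np.outer(F[j], F[j].conj()) for j in range(d)]

d = 3
for P in fourier_projectors(d):
    assert np.allclose(np.diag(P).real, 1.0 / d)   # <j|P|j> = 1/d: equatorial
    assert np.allclose(P @ P, P)                   # rank-1 projector
    assert abs(np.trace(P).real - 1.0) < 1e-12
```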
In addition to the original cluster state scheme, there are others that involve different diagonal gates and $Z$/equatorial measurements, or some subset of them - examples include weighted graph states \cite{gross2007measurement}, states built from more general control phase gates \cite{KissingerW}, states built from CCZ gates \cite{MillerM, Tomo}, and generalisations of the original cluster scheme to qudits \cite{Hall}. In all such systems there will be a similar `transition' as happens for the cluster systems considered in previous sections - at one extreme quantum computation is possible, but with particles initialised in an appropriate `cylinder' one can efficiently simulate classically. \noindent {\bf Conditions} $(\alpha)-(\delta)$ {\bf :} Suppose that we have several quantum particles undergoing a quantum gate $\mathcal{V}$ (where $\mathcal{V}$ is the superoperator corresponding to an underlying unitary matrix, i.e. $\mathcal{V}(\rho) = V\rho V^{\dag}$ for some unitary matrix $V$). Let us assume that associated with each particle $j \in \{1,\ldots,N\}$ there is an abelian group $G_j$ of unitaries. We will make four assumptions about this setup. The first three $(\alpha-\gamma)$ are as follows; the fourth $(\delta)$ will be explained shortly: \begin{enumerate} \item[($\alpha$)] Each group $G_j$ can be averaged over a Haar measure. Denote the resultant quantum operation (i.e. applying a unitary $U_j$ drawn randomly from the Haar measure) by $\mathcal{D}_j$, which can be considered to be a kind of dephasing operation: \begin{equation*} \mathcal{D}_j(\sigma) := \int U_j \sigma U^{\dag}_j dU_j \end{equation*} \item[($\beta$)] There is a set $\mathcal{M}_j$ of permitted POVM elements on particle $j$ which is invariant under $G_j$, i.e. $U\mathcal{M}_jU^{\dag} = \mathcal{M}_j$ for any $U \in G_j$. Here by `set of permitted POVM elements' we mean that any allowed complete measurement on a particle is formed from members of $\mathcal{M}_j$.
For technical reasons we will additionally assume that the normalised dual of this set of measurements is bounded (see discussion below equation (\ref{generalcyl})). We do not need to distinguish between a given POVM element $M$ and $\nu M$ for any $\nu > 0$, so in fact each $\mathcal{M}_j$ can be considered a cone. We consider only destructive measurements - i.e. each particle is discarded after measurement. \item[($\gamma$)] The multi-particle gate $\mathcal{V}$ commutes with any product of unitaries drawn from $\bigotimes_j G_j$. \end{enumerate} On the basis of these assumptions, we will make the following definitions: \begin{enumerate} \item {\bf Phasing.} We will define a local linear operation, parameterised by a real parameter $r \geq 0$, that is a linear combination of the identity operation $\mathcal{I}$ (leaving inputs alone) and $\mathcal{D}_j$: \begin{equation} \mathcal{T}_j(r) := r\mathcal{I} + (1-r) \mathcal{D}_j \end{equation} As a transformation on input operator $\rho$ the operation $\mathcal{T}_j(r)$ acts as: \begin{equation} \mathcal{T}_j(r) : \rho \rightarrow r \rho + (1-r)\mathcal{D}_j(\rho) \end{equation} Note that this only gives a physical quantum operation when $r \in [0,1]$, in which case it represents dephasing noise. However, it is convenient for us to allow all non-negative $r$. When $1 \geq r > 0$ we will say that the operation is a noisy dephasing operation, but for more general $r$ we will refer to it as a `phasing' operation. 
The definition allows us to invert $\mathcal{T}_j(r)$ for $r > 0 $ with another phasing operation: \begin{equation*} (\mathcal{T}_j(r))^{-1} = \mathcal{T}_j(r^{-1}) \end{equation*} and express the product of two phasing operators as: \begin{equation} \label{composition} \mathcal{T}_j(rs) = \mathcal{T}_j(r)\mathcal{T}_j(s) \end{equation} \item {\bf Local `cylinder'.} For each particle $j$ we will consider the normalised dual of the measurements, which we call a `cylinder of radius 1', defined as follows: \begin{equation} \label{nomdualdef} {\mathcal{M}}^*_j(1) := \{ \rho | \mbox{tr}\{M \rho\} \geq 0 \,\, \forall M \in \mathcal{M}_j, \,\,\mbox{tr}\{\rho\}=1 \} \end{equation} A `cylinder' of arbitrary non-negative radius $r \geq 0$ will then be defined in terms of the action of $\mathcal{T}_j(r)$ on ${\mathcal{M}}^*_j(1)$: \begin{equation} \label{generalcyl} {\mathcal{M}}^*_j(r) := \mathcal{T}_j(r)({\mathcal{M}}^*_j(1)) \end{equation} There is one subtlety that needs to be addressed with regards to using these cylinders for classical simulation. In order to apply the brute force search over candidate separable decompositions in the gate simulation step of \cite{HN} the algorithm needs the state space to be described not as the dual of a set of measurements as we have done in equation (\ref{generalcyl}), but as the convex hull of a set of extremal points, in order to exploit Carath\'{e}odory's theorem. However, we can fix this using standard methods, as follows. Given a discretisation of the allowed measurements (determined by what precision the experimenter can set their measurement device to), each permitted measurement operator provides a bounding hyperplane of the dual, and from standard considerations \cite{Avis} the extrema are the intersections of $m$ of these planes, where $m$ is the (real) dimension of the dual space.
For a set of $m$ planes whose intersection defines an extremal point, we can compute the extremal point to $l$ bits of accuracy by solving the relevant linear equations provided that the allowed measurements are described to $O(l)$ bits of precision (as a finite number of arithmetic operations are needed, and we are assuming that the cylinder is bounded). Hence there will be at most $O((\exp(O(l)))^m)$ extrema to $l \sim O(\log (poly(n)/\epsilon))$ bits of precision, giving an overall additional cost of $O((poly(n)/\epsilon)^m)$ per cylinder, which is polynomial for constant $m$. As there are at most $n$ particles in the system, and hence at most $n$ cylinders, this remains polynomial. \end{enumerate} One of the consequences of the above assumptions is that when $r \in [0,1]$ (in which case the phasing to radius $r$ is a conventional noisy dephasing operation) the invariance of the permitted measurements $\mathcal{M}_j$ implies that for any $j$: \begin{equation*} \mathcal{T}_j(r) ({\mathcal{M}}^*_j(1)) \subseteq {\mathcal{M}}^*_j(1) \end{equation*} and hence for any $r_2 \leq r_1$: \begin{eqnarray*} {\mathcal{M}}^*_j(r_2) = \mathcal{T}_j(r_1) \mathcal{T}_j(r_2/r_1) ({\mathcal{M}}^*_j(1)) \\ \subseteq \mathcal{T}_j(r_1) ({\mathcal{M}}^*_j(1)) = {\mathcal{M}}^*_j(r_1) \end{eqnarray*} So cylinders of a given radius contain all cylinders of smaller radius, and in particular $ {\mathcal{M}}^*_j(0)$ is contained in all other cylinders at site $j$.
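As a quick numerical sanity check, the composition rule (\ref{composition}) and the inversion rule can be verified directly. The sketch below (in Python, outside the formalism of the paper) assumes for concreteness that $\mathcal{D}_j$ is full computational-basis dephasing, i.e. it keeps only the diagonal of its input; only the idempotence of $\mathcal{D}_j$ is actually used.

```python
import numpy as np

def dephase(rho):
    """Stand-in for D_j: full computational-basis dephasing (keeps the diagonal).
    Any idempotent dephasing map would satisfy the same composition rule."""
    return np.diag(np.diag(rho))

def phasing(r, rho):
    """The phasing map T(r) = r*Id + (1-r)*D; a physical channel only for r in [0,1]."""
    return r * rho + (1 - r) * dephase(rho)

rng = np.random.default_rng(0)
rho = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Composition: T(r)T(s) = T(rs)
assert np.allclose(phasing(0.3, phasing(0.5, rho)), phasing(0.15, rho))

# Inversion: T(1/r) undoes T(r), even though 1/r > 1 is not a physical channel
assert np.allclose(phasing(1 / 0.3, phasing(0.3, rho)), rho)
```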
We now make one further assumption about the gate $\mathcal{V}$: \begin{enumerate} \item[($\delta$)] We assume that if $\mathcal{V}$ acts on products of inputs from $\bigotimes_j {\mathcal{M}}^*_j(1)$ then there is a constant $1 \geq \mu > 0$ such that \begin{equation*} \left( \bigotimes_j \mathcal{T}_j(\mu) \right) \mathcal{V} \left(\bigotimes_j {\mathcal{M}}^*_j(1) \right) \subseteq {\rm Sep}\left(\bigotimes_j {\mathcal{M}}^*_j(1)\right) \end{equation*} This assumption asserts that for a given $\mathcal{V}$ there is some amount of local dephasing noise acting at each site, other than the maximal dephasing $\mathcal{T}_j(0) = \mathcal{D}_j$ itself, which makes $\mathcal{V}$ into an operation that preserves unit cylinder separability of the inputs. Let $c$ be the maximum of all $\mu$ with this property. We can restate the assumption by acting on both sides of the equation with the inverse phasing operation to give: \begin{equation*} \mathcal{V} \left(\bigotimes_j {\mathcal{M}}^*_j(1) \right) \subseteq {\rm Sep}\left(\bigotimes_j {\mathcal{M}}^*_j({1 \over c})\right) \end{equation*} \end{enumerate} The assumptions $(\alpha)-(\delta)$ have the consequence that $\mathcal{V}$ acting upon a collection of cylinders will lead to a separable output provided that the cylinder radii are grown by a factor $1/c$, as may be seen by the following equation: \begin{eqnarray*} \mathcal{V} \left(\bigotimes_j {\mathcal{M}}^*_j(r_j) \right) = \bigotimes_j \mathcal{T}_j(r_j) \mathcal{V} \left(\bigotimes_j {\mathcal{M}}^*_j(1) \right)\\ \subseteq \bigotimes_j \mathcal{T}_j(r_j) {\rm Sep}\left(\bigotimes_j {\mathcal{M}}^*_j({1 \over c})\right) = {\rm Sep}\left(\bigotimes_j {\mathcal{M}}^*_j({r_j \over c})\right) \end{eqnarray*} This means that for any system obeying the assumptions $(\alpha)-(\delta)$, \begin{equation} {1 \over c} \end{equation} will serve as the analogue of the growth factor $\lambda$ as used in the qubit case, and will allow for an analogous classical simulation algorithm to be
formulated. \noindent {\bf Proof (Privileged Basis Systems satisfy the conditions):} We now show that PBS satisfy the conditions $(\alpha) - (\delta)$. Apart from the requirement that the ${\mathcal{M}}^*_j({1})$ are bounded (which is part of condition ($\beta$)), conditions $(\alpha) - (\gamma)$ are immediately satisfied if we pick each group $G_j$ to consist of the unitaries on particle $j$ that are diagonal in the computational basis. So we need to show (i) that the ${\mathcal{M}}^*_j({1})$ are bounded in order to fully satisfy the $(\beta)$ condition, and (ii) that assumption $(\delta)$ is satisfied if we dephase using these groups. For simplicity in our discussion we assume that all the qudits have the same dimension $d$ and use ${\mathcal{M}}^*(1)$ to refer to the unit cylinder for any one particle and its allowed measurements. The argument can easily be extended to situations in which the particles have different dimensions. We denote the qudit computational basis by $\ket{0},\ket{1},...,\ket{d-1}$. Let us first explain the boundedness. In fact we will show that any $\rho \in \cyl{1}$ must be both Hermitian and bounded. The diagonal elements of $\rho$ must be bounded as they are valid probabilities for outcomes of a $Z$ measurement. So we need only consider off-diagonal elements. W.l.o.g. we consider the element $\bra{0}\rho \ket{1}$ and write it as $\bra{0}\rho \ket{1} = t \exp(i \omega)$. The argument easily extends to other off-diagonal elements. The approach we take is to express the off-diagonal elements in terms of probabilities of measurement outcomes, and this will allow us to show both boundedness and hermiticity.
Using the fact that $\rho$ is of unit trace (its diagonal forms a probability distribution) it can be verified that: \begin{equation} \label{bounded} \bra{0}\rho \ket{1} + \bra{1}\rho \ket{0} = - 1 + {d \over 2^{d-2}} \sum \bra{v} \rho \ket{v} \end{equation} where the sum ranges over all vectors $\ket{v}$ of the form $(\ket{0}+\ket{1} \pm \ket{2} \pm \ket{3} \pm ... \pm \ket{d-1})/\sqrt{d}$. Similarly it can be verified that: \begin{equation} \label{bounded2} 2t = - 1 + {d \over 2^{d-2}} \sum \bra{\tilde{v}} \rho \ket{\tilde{v}} \end{equation} where the sum ranges over all vectors $\ket{\tilde{v}}$ of the form $(\exp(i \omega) \ket{0}+\ket{1} \pm \ket{2} \pm \ket{3} \pm ... \pm \ket{d-1})/\sqrt{d}$. As the vectors $\ket{v},\ket{\tilde{v}}$ are all unbiased w.r.t. the computational basis, the $\ket{v}\bra{v},\ket{\tilde{v}}\bra{\tilde{v}}$ give permitted equatorial measurement operators. Hence the rightmost sums of both equations (\ref{bounded}) and (\ref{bounded2}) are sums of probabilities, and so the right sides of (\ref{bounded}) and (\ref{bounded2}) are both real and bounded. Hence (\ref{bounded}) shows that $\rho$ is Hermitian, and (\ref{bounded2}) shows that it is bounded. Condition ($\beta$) is hence established. Let us now turn to condition ($\delta$). To explain the argument it will be helpful to partially characterise the extremal points of the normalised dual of the allowed measurements on any one of the particles. If an operator $\rho$ is in ${\mathcal{M}}^*(1)$, then its diagonal in the computational basis must be a probability distribution, as it must be in the dual of the $Z$ measurement. Now consider forming an operator $\rho'$ by replacing the diagonal of $\rho$ with another probability distribution. It is easy to see that $\rho'$ will also be in $\cyl{1}$, as it returns a valid probability distribution for $Z$ measurements, and for any of the equatorial measurements changing the probability distribution on the diagonal has no effect.
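The final claim here, that swapping the diagonal of $\rho$ for any other probability distribution cannot affect equatorial outcome probabilities, is easy to check numerically: the diagonal enters an unbiased projector's probability only through the trace. A small sketch (dimension $d=3$ chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# A Hermitian, trace-one operator rho, and rho2 with its diagonal replaced
# by a different probability distribution
rho = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = (rho + rho.conj().T) / 2
rho += (1 - np.trace(rho).real) * np.eye(d) / d
p_new = rng.dirichlet(np.ones(d))
rho2 = rho - np.diag(np.diag(rho)) + np.diag(p_new)

# For any unbiased (equatorial) rank-1 projector |u><u| with |u_k| = 1/sqrt(d),
# the outcome probability is unchanged by the diagonal swap
for _ in range(200):
    u = np.exp(1j * rng.uniform(0, 2 * np.pi, d)) / np.sqrt(d)
    assert np.isclose((u.conj() @ rho @ u).real, (u.conj() @ rho2 @ u).real)
```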
This means that the extreme points of $\cyl{1}$ must have deterministic distributions on the diagonal consisting of one `1' and the rest zeros, i.e. their diagonals must be one of $(1,0,0,...), (0,1,0,0,...)$, etc. Hence an extremal point of a single state space $\cyl{1}$ must be of the form: \begin{equation*} \ket{a}\bra{a} + \sum_{m \neq n} c_{m,n} \ket {m}\bra{n} \end{equation*} where $\ket{a}\bra{a}$ is a computational basis state. Now consider an $N$ qudit gate $\mathcal{V}$ that is formed from a unitary matrix $V$ that is diagonal in the computational basis. Consider acting this gate upon $N$ input qudits prepared in a product of such extrema, followed by independent noisy dephasing $\mathcal{T}_{\eta} = \eta \mathcal{I}+ (1-\eta)\mathcal{D}_G$ on each particle, where $\eta >0$ will be a small parameter whose value we will choose shortly. Our goal is to argue that if $\eta$ is small enough, the result of such an operation will be $\bigotimes^N_{j=1}\cyl{1}$-separable. A product of input extrema drawn from $\bigotimes^N_{j=1}\cyl{1}$ will be of the form: \begin{equation} \label{Vout} \ket{\underline{a}}\bra{\underline{a}} + \sum_{\underline{m}\neq \underline{n}} c_{\underline{m},\underline{n}} \ket {\underline{m}}\bra{\underline{n}} \end{equation} for some coefficients $c_{\underline{m},\underline{n}}$ where $\underline{a},\underline{m},\underline{n} \in \mathbb{Z}^N_d$. Note that there are other constraints on the $\underline{m},\underline{n}$ appearing in this sum, as when the first particle is (say) prepared in an extremum of the form {$\ket{0}\bra{0} +$ {\it off diag.}}, no term will appear in the sum of equation (\ref{Vout}) in which both $\underline{m},\underline{n}$ have the same value in the first position, unless that value is $0$. Additionally, as the sum must be Hermitian we have $c_{\underline{m},\underline{n}}=c^*_{\underline{n},\underline{m}}$.
If we now act on equation (\ref{Vout}) with $V$, then as $V$ is diagonal in the computational basis the form of the equation will not change. So the output of $V$ acting upon input extrema will again be of the form (\ref{Vout}), and moreover by the foregoing discussion it will be Hermitian and bounded. Now if we apply $\bigotimes^N_{j=1} \mathcal{T}_{\eta}$ to an operator of the form of equation (\ref{Vout}), off-diagonal terms will be multiplied by powers of $\eta$. Hence after the dephasing operation the state will be of the form: \begin{equation} \label{Tout} \ket{\underline{a}}\bra{\underline{a}} + \eta \sum_{\underline{m}\neq \underline{n}} c_{\underline{m},\underline{n}} \ket {\underline{m}}\bra{\underline{n}} \end{equation} where any further powers of $\eta$ have been absorbed into the coefficients $c_{\underline{m},\underline{n}}$. We will now argue that by making $\eta$ small we will force the state of (\ref{Tout}) to become generalised separable. Let the number of ways of picking {\bf ordered} pairs $(\underline{m},\underline{n})$ such that $\underline{m} \neq \underline{n}$ be $2W$, where $W$ is a positive integer (the number of ways must be even as we can interchange $\underline{m}$ and $\underline{n}$ for any suitable choice).
Consider one specific way of picking $\underline{m} \neq \underline{n}$, say $\underline{m}= \underline{x},\underline{n} = \underline{y}$ and for the corresponding {\bf unordered} pair $\{\underline{x},\underline{y}\}$ define \begin{equation*} A_{\{\underline{x},\underline{y}\}} := \ket{\underline{a}}\bra{\underline{a}} + \eta W c_{\underline{x},\underline{y}} \ket{\underline{x}}\bra{\underline{y}}+\eta W c_{\underline{y},\underline{x}} \ket {\underline{y}}\bra{\underline{x}} \end{equation*} Then equation (\ref{Tout}) can be rewritten as: \begin{equation*} {1 \over W} \sum_{\substack{\{\underline{x},\underline{y}\}\\ \underline{x} \neq \underline{y}}} A_{\{\underline{x},\underline{y}\}} \end{equation*} where the sum is over unordered pairs $\{\underline{x},\underline{y}\}$ such that $\underline{x} \neq \underline{y}$. We will supply a separable decomposition for this expression (for sufficiently small $\eta$) by providing a separable decomposition for each $A_{\{\underline{x},\underline{y}\}}$. With this aim, let us consider one specific $A_{\{\underline{x},\underline{y}\}}$ and (to keep our equations less cluttered) rewrite it in the form \begin{equation} \label{Eform} A_{\{\underline{x},\underline{y}\}} = \ket{\underline{a}}\bra{\underline{a}} + W\bigotimes_j E_j + W\bigotimes_j E^{\dag}_j \end{equation} where the product is over the $N$ qudits and $E_j := (\eta c_{\underline{x},\underline{y}})^{1/N} \ket{x_j}\bra{y_j}$. The separable decomposition for equation (\ref{Eform}) will be made with a convex combination of product operators of the form: \begin{equation} \label{sepdec} \bigotimes_j \left( \ket{a_j}\bra{a_j}+ W^{1/N} e^{{2\pi i \over 8}v_j } E_j + W^{1/N} e^{-{2\pi i \over 8}v_j } E^{\dag}_j \right) \end{equation} with real $v_j$. There are two things that we need to demonstrate: that (i) an appropriate mixture of these products gives a decomposition of (\ref{Eform}), and (ii) the local operators in each product are contained within $\cyl{1}$.
The second of these points is straightforward: consider a permitted measurement operator $M$ measured on one of the factors in equation (\ref{sepdec}); the probability will be of the form: \begin{equation*} \mbox{tr} \{ M \left(\ket{a_j}\bra{a_j}+ W^{1/N} e^{{2\pi i \over 8}v_j } E_j + W^{1/N} e^{-{2\pi i \over 8}v_j } E^{\dag}_j \right) \} \end{equation*} where (by the discussion following equation (\ref{Vout})) the $E_j,E_j^{\dag}$s are either off diagonal or proportional to $\ket{a_j}\bra{a_j}$. The value of this probability will be zero if $M$ is a $Z$ measurement operator not equal to $\ket{a_j}\bra{a_j}$. For all other permitted measurement operators - i.e. either $\ket{a_j}\bra{a_j}$ or the equatorial ones - the value of $\mbox{tr}\{M \ket{a_j}\bra{a_j}\}$ will be $\geq 1/d$. As this is strictly positive, by making $\eta$ small enough the potentially negative contribution from the $E_j,E_j^{\dag}$s will not make the overall probability negative. Hence for some sufficiently small $\eta > 0$ the local operators in equation (\ref{sepdec}) are guaranteed to be from $\cyl{1}$. Let us now turn to point (i), checking that we can decompose the $A_{\{\underline{x},\underline{y}\}}$ as a separable mixture.
Consider picking $N-1$ integers $v_j$ for $j=1,...,N-1$ from $\mathbb{Z}_8 = \{0,1,2,...,7\}$ completely at random, and then setting an $N$th integer $v_N$ to be: \begin{equation} \label{veeN} v_N = - \sum_{i=1}^{N-1} v_i \end{equation} It is not difficult to verify (as we will shortly do) that the uniform mixture of (\ref{sepdec}) over all such choices of $v_j$ equals $A_{\{\underline{x},\underline{y}\}}$: \begin{eqnarray*} A_{\{\underline{x},\underline{y}\}} = \sum {1 \over 8^{N-1}} \\ \left(\bigotimes_j \left( \ket{a_j}\bra{a_j}+ W^{1/N} e^{{2\pi i \over 8}v_j } E_j + W^{1/N} e^{-{2\pi i \over 8}v_j } E^{\dag}_j \right) \right) \end{eqnarray*} This expression is hence our desired generalised separable decomposition for the $A$s, and hence supplies a generalised separable decomposition for equation (\ref{Tout}) provided that $\eta$ is small enough. For convenience we now explain how this identity arises. In order to match (\ref{Eform}), when we sum over the product operators in equation (\ref{sepdec}), we will need to eliminate `cross terms' such as: \begin{equation} E_1 \otimes \ket{a_2}\bra{a_2} \otimes E^{\dag}_3 \otimes .... \end{equation} which do not appear in (\ref{Eform}) because these `cross terms' contain basis states on some sites, $E_j$'s on other sites, and $E_j^{\dag}$s on yet others. In equation (\ref{Eform}) there are only the `non-cross' terms \begin{equation} \bigotimes_j \ket{a_j}\bra{a_j} \,\,\,\, , \,\,\,\, \bigotimes_j E_j \,\,\,\, , \,\,\,\, \bigotimes_j E^{\dag}_j \end{equation} We have picked our ensemble of $v_1,...,v_N$ such that any `cross terms' cancel out, leaving only the `non-cross terms' that appear in equation (\ref{Eform}). We can see this by substituting for $v_N$ in our separable decomposition using expression (\ref{veeN}).
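Before tracing through the substitution, the identity can be checked numerically for small $N$. In the sketch below, arbitrary random matrices stand in for $\ket{a_j}\bra{a_j}$ and $W^{1/N}E_j$: the cancellation relies only on the phases, not on the particular operators.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N = 3
omega = np.exp(2j * np.pi / 8)

# Random stand-ins: P[j] for |a_j><a_j|, F[j] for W^(1/N) E_j
P = [rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)) for _ in range(N)]
F = [rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2)) for _ in range(N)]

def kron_all(mats):
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Uniform mixture over v_1,...,v_{N-1} in Z_8, with v_N = -(v_1+...+v_{N-1})
mix = 0
for vs in itertools.product(range(8), repeat=N - 1):
    v = list(vs) + [-sum(vs)]
    factors = [P[j] + omega ** v[j] * F[j] + omega ** (-v[j]) * F[j].conj().T
               for j in range(N)]
    mix = mix + kron_all(factors)
mix = mix / 8 ** (N - 1)

# All cross terms cancel, leaving only the three `non-cross' terms
target = kron_all(P) + kron_all(F) + kron_all([f.conj().T for f in F])
assert np.allclose(mix, target)
```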
We find that every cross term in the separable decomposition carries a non-trivial phase related to at least one of the $v_j$s, as follows: \begin{enumerate} \item Any cross term containing an $\ket{a_j}\bra{a_j}$ for $j \neq N$ will contain a phase $e^{\pm {2\pi i \over 8}v_j }$ due to the contribution from $v_N$. \item Any cross term containing $\ket{a_N}\bra{a_N}$ will contain a phase $e^{\pm {2\pi i \over 8}v_j }$ for some $j \neq N$. \item Any other cross terms (those containing no $\ket{a_j}\bra{a_j}$ at all, only $E_j$ or $E^{\dag}_j$ contributions, but at least one of each) will have an overall phase of the form: \begin{eqnarray*} \exp \left( {2\pi i \over 8}\left(\sum_{i=1}^{N-1} \pm v_i \pm \sum_{i=1}^{N-1} v_i \right) \right) \end{eqnarray*} such that the overall phase does not cancel, which means that for some $j \neq N$ there will be a phase contribution of \begin{eqnarray*} \exp \left( \pm {2\pi i \over 8} 2v_j \right) \end{eqnarray*} \end{enumerate} Altogether this means that all the cross terms contain, for some value of $j$, either a non-trivial phase contribution of the form: \begin{eqnarray*} \exp \left( \pm {2\pi i \over 8} v_j \right) \end{eqnarray*} or one of the form \begin{eqnarray*} \exp \left( \pm {2\pi i \over 8} 2v_j \right) \end{eqnarray*} We can exploit this to eliminate the cross terms. In choosing the $v_j$ in the way that we have, we note that when summing the phases over them we get: \begin{eqnarray*} \sum_{v_j} \exp \left( \pm {2\pi i \over 8} v_j \right) = \sum_{v_j} \exp \left( \pm {2\pi i \over 8} 2v_j \right) = 0 \end{eqnarray*} Hence the cross terms cancel to leave exactly the right side of (\ref{Eform}), as desired. All of this means that we have the following result: \noindent {\bf Theorem 3:} Consider {\it privileged basis systems}, i.e.
suppose that we have a computational basis (the `$Z$ basis'), a set of permitted destructive measurements $\mathcal{M}$ consisting of measurements in the computational basis (`$Z$ measurements') and measurements consisting of all unbiased rank-1 projectors (`equatorial measurements'), and suppose moreover that each qudit undergoes at most $D$ gates, which all are diagonal in the computational basis. Then there is a $1 \geq c > 0$ such that for qudits initialised from the set \begin{equation} \cyl{c^D} = (c^D \mathcal{I} + (1-c^D) \mathcal{D}_G)\cyl{1} \end{equation} adaptive permitted measurements can be efficiently sampled classically, and moreover the system has a local hidden variable model. In this equation $\cyl{1}$ represents the normalised dual of the permitted measurements, and $\mathcal{D}_G$ represents the total dephasing operation. Note that although we have presented the argument with the same dimension qudit at each site and the same gate at each edge, this is not necessary for the argument. $\blacksquare$ We note that the cylinder separable states in this more general setting can include pure multiparticle quantum entangled states. To see this, first note that $\cyl{r}$ will contain pure quantum states that are superpositions in the computational basis with one dominant element, such as: \begin{equation*} \ket{\psi} = \sqrt{1- (d-1)\epsilon^2} \ket{0} + \epsilon \sum_{j=1}^{d-1} \ket{j} \end{equation*} because if $\epsilon$ is small enough then $\mathcal{T}_{1/r} (\ket{\psi}\bra{\psi})$ will be positive for the permitted measurements for similar reasons as discussed previously - the dominant contribution from $\ket{0}\bra{0}$ will outweigh any negative contribution from any off diagonal terms. It is then not difficult to construct multi-particle unitaries that are diagonal in the computational basis that, acting upon such inputs, will lead to pure entangled quantum states just as in the case of the $CZ$ gate.
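As an illustration of this last remark, the following sketch checks numerically that such a state lies in a cylinder of radius $r<1$. The values $d=3$, $r=1/2$ and $\epsilon=0.1$ are chosen purely for illustration, and the equatorial measurements are randomly sampled rather than exhaustively checked.

```python
import numpy as np

d, r, eps = 3, 0.5, 0.1
psi = np.array([np.sqrt(1 - (d - 1) * eps ** 2)] + [eps] * (d - 1))
rho = np.outer(psi, psi.conj())

# T(1/r): the (unphysical) phasing that inflates off-diagonals by 1/r;
# psi is in Cyl(r) iff this remains positive on the permitted measurements
s = 1 / r
trho = s * rho + (1 - s) * np.diag(np.diag(rho))

# Z-measurement probabilities are just the (unchanged) diagonal
assert np.all(np.diag(trho).real >= 0)

# Randomly sampled equatorial (unbiased rank-1) projectors
rng = np.random.default_rng(3)
for _ in range(2000):
    u = np.exp(1j * rng.uniform(0, 2 * np.pi, d)) / np.sqrt(d)
    assert (u.conj() @ trho @ u).real >= 0
```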
\section{Coarse Graining} \label{SectionCoarse} In our approach to classical simulation thus far, we faced two conflicting requirements. In order to maintain a separable decomposition, the radii must grow with each application of a $CZ$ gate. However, we have a limit to how far the radii can be grown, because they must satisfy the dual constraint, i.e. the radii must not exceed $1$ in order not to leave the dual of the permitted measurements. In this section we will see that we can increase the range of systems that can be efficiently simulated classically by managing this tradeoff better through a coarse grained approach. It is helpful to consider the classical simulation approach we have taken more generally. Consider a set of particles undergoing a quantum circuit, and imagine that particle $i$ is initialised in a quantum state drawn from a set of operators $S_i$. Each time a gate acts, we attempt to update the state spaces to maintain a separable decomposition (this will usually involve `growing' the state spaces in some way, just as we grew the radii in the previous sections). Eventually, at the end of the circuit, we hope that the final state spaces are small enough to be inside the dual of whatever measurements we are permitting. If all steps of the scheme can be accomplished, then it could be a route to an efficient classical simulation provided that the technical requirements of \cite{HN} are also met. Of course, given that quantum separability is a hard problem, one might expect that it will usually be too difficult to pursue this approach. However, we have a few advantages in our favour: firstly, we may try to pick state spaces for which showing separability is easy; secondly, by considering finite degree cluster-like computations, each particle only undergoes a constant number of quantum-entangling gates; thirdly, we may exploit the structure of the interaction graph.
In this section we will see how the technique of coarse graining may help us exploit these advantages to improve the bounds presented above. While the argument can be applied to any lattice for which a regular tiling of increasing size is possible, for simplicity of explanation we will consider a 2-dimensional lattice of qubits of size $N \times M$. We begin by forming the qubits into identical rectangular blocks of qubits that we will treat as single particles (we'll assume that $N$ and $M$ are chosen such that this is possible, e.g. they aren't prime). We partition the $CZ$ gates into two types: the ones acting internally within each block, and the ones acting externally that connect different blocks. We call these gates ``internal'' and ``external'' $CZ$s respectively. Now imagine that we have initialised the qubits and are about to embark upon performing the $CZ$ gates. We will analyse situations like this in a few steps, exploiting the fact that the $CZ$ gates commute, and therefore can be implemented in any order. \begin{enumerate} \item We pick a starting state space for each block that is simply the product of individual cylinders ${\rm Cyl}(r)$ on each qubit. 
In an $8 \times 8$ lattice, for instance, we would have the following layout of qubits, where the dots represent our initial ${\rm Cyl}(r)$ state spaces: \begin{center} \begin{tabular}{|cccccccc|} \hline $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ \hline \end{tabular} \end{center} \item We now partition the qubits into blocks of fixed size. For instance, we may partition the $8 \times 8$ lattice we are considering into four $4 \times 4$ blocks: \begin{center} \begin{tabular}{|cccc|cccc|} \hline $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ \hline $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ \hline \end{tabular} \end{center} \item We then apply the external $CZ$ gates that connect qubits between different blocks.
Using the approach of earlier sections, to maintain a separable decomposition, we allow each individual qubit radius to grow according to the number of external $CZ$ gates applied to that qubit. On a given block $b$ containing $n$ qubits, we tentatively define the block state space as the product $S'_b(r) := \bigotimes_{i=1,..,n} {\rm Cyl}(r_i)$, where, for qubit $i$ in the block, $r_i=r \lambda^{e_i}$, with $e_i$ denoting the number of external $CZ$s the qubit undergoes. This means that the block state spaces are parameterised by the single parameter $r$. By construction, the resulting state (without yet having applied the internal $CZ$s) is separable with respect to these state spaces. Although there may be more complicated state spaces that give better eventual classical algorithms, by following the path we have taken we avoid the need for a potentially difficult separability analysis. So, in our $8 \times 8$ example, for instance, we will now have new cylindrical state spaces as follows: \begin{center} \begin{tabular}{|cccc|cccc|} \hline $\cdot$ & $\cdot$ & $\cdot$ & $\circ$ & $\circ$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\circ$ & $\circ$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\circ$ & $\circ$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\circ$ & $\circ$ & $\circ$ & $\bigcirc$ & $\bigcirc$ & $\circ$ & $\circ$ & $\circ$ \\ \hline $\circ$ & $\circ$ & $\circ$ & $\bigcirc$ & $\bigcirc$ & $\circ$ & $\circ$ & $\circ$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\circ$ & $\circ$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\circ$ & $\circ$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\circ$ & $\circ$ & $\cdot$ & $\cdot$ & $\cdot$ \\ \hline \end{tabular} \end{center} In this diagram the dots represent ${\rm Cyl}(r)$, the smaller circles represent ${\rm Cyl}(\lambda r)$, and the biggest circles represent ${\rm Cyl}(\lambda^2 r)$.
\item We then obtain new state spaces $S''_b(r)$ by applying the internal $CZ$s to the $S'_b(r)$ state spaces we had in the previous step. This leads to a constraint on $r$, as for the classical simulation we require the state spaces $S''_b(r)$ to be contained within the dual of the permitted cluster circuit measurements. Let $r_{b,max}$ be the maximum value of $r$ such that $S''_b(r)$ is contained within the dual, and define our final state spaces as $S_b := S''_b(r_{b,max})$. For inputs with radius less than $r_{b,max}$, this furnishes a separable decomposition in terms of the block state spaces $S_b$, which can then be used to provide an efficient classical simulation. \end{enumerate} This coarse graining process can only increase the range of systems that we can efficiently simulate classically. To see why, let us compare the constraints we will have on $r$ from this coarse grained approach, to those obtained in the `fine grained' approach of earlier sections. In fact the only real difference between the approaches occurs in the last step. In the fine grained analysis, $r$ had to be picked so that when the internal $CZ$s are applied, not only did the block state space have to be in the dual of the measurements, but the output state {\it also} had to be separable with respect to an internal partition into ${\rm Cyl}(1)$ spaces. This is a stronger requirement than simply requiring the internal $CZ$s to keep the block state space in the dual, and therefore leads to the possibility that coarse graining could allow us to simulate larger values of $r$. We will see that this is indeed the case; in fact, coarse graining into larger and larger blocks can only increase the values of $r$ that can be efficiently simulated classically. \subsection{Blocks of 2 cylinders} To illustrate the above approach, let us first consider a 2-dimensional square lattice, and suppose that the length of one side is even.
Let us consider a partitioning as follows into blocks of two particles, as illustrated here: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ \hline $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ \hline $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ \hline $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ \\ \hline \end{tabular} \end{center} We will call any such block of two qubits a `2block'. Each qubit in a block away from the perimeter of the entire grid undergoes three external $CZ$s, and so the starting $r$ will be taken to be \begin{equation} r' := r \lambda^3 \end{equation} for all particles not on the perimeter (the qubits on the perimeter would only grow by $\lambda$ or $\lambda^2$, but we won't consider them as they will lead to a weaker constraint on $r$). Previously, in the fine grained analysis, we would at this step apply the remaining $CZ$ (the internal one) between the qubits inside a given block, and that would lead to a constraint that \begin{equation} r' \leq {1 \over \lambda} \Rightarrow r \leq {1 \over \lambda^4}. \end{equation} However, we now instead only need to make sure that we do not get taken out of the dual. We will shortly show that $r_{2block,max}=1/2$, and so provided that \begin{equation} r' \leq {1 \over 2} \Rightarrow r \leq {1 \over 2\lambda^3} \end{equation} applying the internal $CZ$ does not take us out of the dual of the measurements we are permitted, and we have a useable separable decomposition over blocks of 2 particles.
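The value $r_{2block,max}=1/2$ used here is derived in the next paragraphs; a simple numerical scan supports it. The sketch below uses the outcome-probability expression derived below (up to its positive prefactor) and minimises it over the input angles.

```python
import numpy as np

def min_prob(rp, steps=721):
    """Minimum over input angles of the (I-X)/2 x (I-X)/2 outcome probability
    (up to a positive factor) after a CZ acts on two extremal cylinder inputs
    of radius rp with z=+1."""
    th = np.linspace(0.0, 2.0 * np.pi, steps)
    a, b = np.meshgrid(th, th)
    p = 1 - rp * np.cos(a) - rp * np.cos(b) + rp ** 2 * np.sin(a) * np.sin(b)
    return p.min()

# The minimum is 1 - 2r': zero exactly at r' = 1/2, negative beyond it
assert abs(min_prob(0.5)) < 1e-9
assert min_prob(0.4) > 0
assert min_prob(0.6) < 0
```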
Hence by considering blocks of two qubits we obtain an increase, albeit slight, in the size of the region that can be efficiently simulated classically. To see where $r_{2block,max} = 1/2$ comes from, we must consider all extremal inputs to the internal $CZ$ of the form: \begin{equation} \label{input} [1,r' \cos(\theta_A), r' \sin(\theta_A), \pm 1] \otimes [1,r' \cos(\theta_B), r' \sin(\theta_B), \pm 1] \end{equation} and compute the maximum value of $r'$ such that for any inputs of this form and any allowed measurements, we obtain non-negative probabilities after the $CZ$ acts. We first show that we can make a number of simplifications that reduce the inputs that we need to consider. These simplifications will also be needed in further coarse graining later: \begin{enumerate} \item {\bf W.l.o.g. we need only consider measurements in the $XY$ plane}. If either of the measurements on the two particles is a $Z$ measurement, no negativity can arise because if we (by cyclicity of the trace) act the $CZ$ on the measurement operators we simply rotate the non-$Z$ measurement operator by $Z^0 = I$ or $Z^1=Z$. This means that the measurement could be replaced by a product measurement on products of cylinders, and no negativity could arise. \item {\bf W.l.o.g. we need only consider inputs with $z=1$}. Suppose that we start with inputs that have $z=-1$ on a given cylinder. This can be described as a $z=+1$ input with an $X$ applied to it. However, pulling the $X$ through the $CZ$ (by a standard stabilizer calculation) to apply it to the measurement operators gives an $X$ on the measurement for that cylinder, and $Z$s on measurements on the neighbours. These are just new $XY$ plane measurements, so a $z=-1$ can be transformed to a $z=+1$ simply by changing the $XY$ plane measurements being considered. \item {\bf W.l.o.g.
we may restrict to the measurement projector:} \[{I - X \over 2} \] The reason for this is that, by the $Z$ symmetry of the cylinders and the $CZ$ gates, we may simply apply a $z$-axis rotation to take any $XY$ plane projector into the projector $(I-X)/2$. \end{enumerate} Hence we now need to compute the maximal value of $r'$ such that extremal cylinder inputs with that radius and $z=+1$ do not give negative overlap with projectors \[{I - X \over 2} \otimes {I - X \over 2} \] The probability of getting this outcome on inputs satisfying the restrictions can easily be computed to be (up to an unimportant positive factor): \begin{eqnarray} 1 - r' \cos(\theta_A) - r' \cos(\theta_B) + r'^2 \sin(\theta_A) \sin(\theta_B) = \nonumber \\ (1-r' \cos(\theta_A))(1-r' \cos(\theta_B)) - r'^2 \cos(\theta_A+\theta_B) \label{2nd} \end{eqnarray} The expression gives $1-2r'$ when $\theta_A=\theta_B=0$, so we need $r' \leq 1/2$. However, for any $0 \leq r' \leq 1/2$, this is the minimal possible value, as equation (\ref{2nd}) is no smaller than $(1-r')(1-r') - r'^2 = 1-2r'$. Hence the probability is non-negative for all measurements and all inputs iff $r' \leq 1/2 = r_{2block,max}$. This example shows that coarse graining certainly helps to increase the size of the classical region, but as we now discuss one can do better by increasing the size of the blocks. \subsection{Larger Blocks} We will define two sequences of optimisation problems which bound each other, and are obtained by considering increasing block sizes. The upper sequence is non-increasing and the lower sequence is non-decreasing. The limit of the lower sequence gives the radius of inputs that can be efficiently simulated using the coarse graining approach described above. In the case of a 2D lattice the two sequences converge to limits that are quite close, but we do not yet know whether the limits are the same for both sequences. The basic principles can apply to other lattices that can be split into tiles in an appropriate way.
We begin by describing the two sequences. On any given rectangular block $B$ with $H \times W$ qubits (where $H,W \geq 2$) embedded in a larger lattice, we consider two ways of initialising the qubits: \begin{itemize} \item[i)] All cylinders are prepared in arbitrary extremal cylinder states with radius $r$ and $z=+1$, \,\,\, or \item[ii)] All cylinders on interior qubits are prepared in arbitrary extremal cylinder states with radius $r$ and $z=+1$, but qubits in the boundary of the rectangle are prepared in extremal cylinder states with radius grown according to the number of external $CZ$s. So corner particles are prepared in extremal cylinder states with radius $\lambda^2 r$ and $z=+1$, and all other boundary qubits are prepared in extremal cylinder states with radius $\lambda r$ and $z=+1$. \end{itemize} Consider applying internal $CZ$s to the block, and denote the resulting operators describing the whole $H \times W$ block by $\rho(B,r)$ and $\rho_{\lambda}(B,r)$ respectively. We will be interested in when these operators are in the dual of the permitted measurements. If an operator $\rho$ is in the dual of the set of permitted measurements $\mathcal{M}$ we will write $\rho \geq_{\mathcal{M}}0$ (reflecting the fact that the operator is `positive' with respect to the allowed measurements). We define the following quantities: \begin{eqnarray} \label{optimum} s(B) &:=& \max \{ r | \rho(B,r) \geq_{\mathcal{M}} 0 \} \\ s_{\lambda}(B) &:=& \max \{ r | \rho_{\lambda}(B,r) \geq_{\mathcal{M}} 0 \} \end{eqnarray} We are interested in the value of $s_{\lambda}(B)$ for increasing block sizes. For a given value of $r$, if there is a block $B$ such that $r \leq s_{\lambda}(B)$, then inputs from cylinders of radii $r$ can be efficiently simulated classically. 
On the other hand, if there is a block $B$ such that $r > s(B)$, then inputs from cylinders of radii $r$ lead to negative probabilities, and so there cannot be a separable decomposition (which by construction leads to positive probabilities), and moreover the sampling problem would not be well defined in the first place for these values of $r$. So, we are interested in finding blocks such that $s_{\lambda}(B)$ is large. The following lemma helps somewhat with this task. {\bf Lemma 4:} Consider a region $KL$ of qubits embedded in a larger lattice. Consider cutting the region into two disjoint subregions $K$ and $L$ (i.e. we remove the $CZ$ gates joining these two regions). Then we have the following relationships. For any region $F$ whatsoever: \begin{equation} \label{upperseq} s(F) \geq s_{\lambda}(F) \end{equation} For the region $KL$ and its subregions $K$ and $L$: \begin{eqnarray} s(KL) &\leq& \min \{s(K),s(L) \} \label{decrease} \\ s_{\lambda}(KL) &\geq& \min\{ s_{\lambda}(K),s_{\lambda}(L)\} \label{increase} \end{eqnarray} In particular suppose that the two subregions $K$ and $L$ are identical (i.e. correspond to isomorphic graphs), then in that case we would have (writing $K=L$ to denote that the subregions are isomorphic): \begin{eqnarray} s(LL) &\leq& s(L) \\ s_{\lambda}(LL) &\geq& s_{\lambda}(L) \end{eqnarray} Informally this tells us that as we increase the size of a region by repeatedly joining subregions together, then $s$ can only decrease, whereas $s_{\lambda}$ can only increase, in spite of the fact that they are defined via similar-looking optimisation problems. \noindent {\bf Proof:} To see equation (\ref{upperseq}) note that $\rho(F,r)$ can be obtained from $\rho_{\lambda}(F,r)$ by dephasing the externally connected qubits of $F$. As dephasing maintains $\geq_{\mathcal{M}} 0$ positivity, this means that $ s(F)\geq s_{\lambda}(F)$. 
To see equation (\ref{decrease}) we observe that cutting the $CZ$ gates joining regions $K,L$ does not change the marginal operators on $K$ or $L$, i.e. $\mbox{tr}_K\{ \rho(KL,r)\} = \rho(L,r) $ (and as the labelling of the region is not important, this is true if we interchange $K$ and $L$ too). This may be seen by a simple computation: a cylinder extremum with $z=+1$ can be written in the form $\ket{0}\bra{0}+ a\ket{0}\bra{1} + b\ket{1}\bra{0}$ for some $a,b \in \mathbb{C}$. Consider for instance interacting this with two other particles in an arbitrary state $T$, using two $CZ$ gates. They become: \begin{equation*} \ket{0}\bra{0} \otimes T + a\ket{0}\bra{1} \otimes (T Z^{\otimes 2}) + b\ket{1}\bra{0} \otimes (Z^{\otimes 2} T) \end{equation*} Tracing out the first particle leaves the remaining particles in their original marginal state $T$. One can see that this would be the case irrespective of the number of $CZ$ gates applied. Hence the marginal state of a given region does not change when external $CZ$s are applied. Now, if $\rho(KL,r)\geq_{\mathcal{M}}0$ then we also have $\mbox{tr}_K \{\rho(KL,r)\}\geq_{\mathcal{M}}0$, but as $\mbox{tr}_K\{\rho(KL,r)\}=\rho(L,r)$ this means that $\rho(KL,r)\geq_{\mathcal{M}}0$ implies both $\rho(K,r)\geq_{\mathcal{M}} 0$ and $\rho(L,r)\geq_{\mathcal{M}}0$, hence we have equation (\ref{decrease}). To see equation (\ref{increase}), we note that $\rho_{\lambda}(KL,r)$ is in the convex hull of products $\rho_{\lambda}(K,r) \otimes \rho_{\lambda}(L,r)$ by using the stochastic representation of the $CZ$s that join $K$ and $L$. Hence $\rho_{\lambda}(KL,r)$ on the block $KL$ must be $\geq_{\mathcal{M}} 0$ if $\rho_{\lambda}(K,r)\geq_{\mathcal{M}}0$ and $\rho_{\lambda}(L,r)\geq_{\mathcal{M}}0$, and so equation (\ref{increase}) must hold. 
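The marginal computation above is easy to confirm numerically. The following sketch (the values of $a$, $b$ and the random operator $T$ are arbitrary choices) couples a cylinder extremum with $z=+1$ to two qubits holding an arbitrary operator $T$ via two $CZ$ gates, and checks that tracing out the first particle returns $T$:

```python
import numpy as np

# A cylinder extremum with z = +1 has the form |0><0| + a|0><1| + b|1><0|.
# Couple it to two other qubits (holding an arbitrary operator T) with two
# CZ gates, then trace it out: the marginal on the other qubits is
# unchanged.  The values of a, b and the random T are arbitrary.
rng = np.random.default_rng(0)
a, b = 0.3 + 0.4j, 0.1 - 0.2j
sigma = np.array([[1.0, a], [b, 0.0]])
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

Z = np.diag([1.0, -1.0])
I2, I4 = np.eye(2), np.eye(4)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
CZ12 = np.kron(P0, I4) + np.kron(P1, np.kron(Z, I2))   # CZ on qubits 1,2
CZ13 = np.kron(P0, I4) + np.kron(P1, np.kron(I2, Z))   # CZ on qubits 1,3

# CZ gates are real diagonal and self-inverse, so U rho U^dag = U rho U:
rho = CZ13 @ CZ12 @ np.kron(sigma, T) @ CZ12 @ CZ13

# Partial trace over the first qubit: the marginal is the original T.
marginal = rho.reshape(2, 4, 2, 4).trace(axis1=0, axis2=2)
assert np.allclose(marginal, T)
```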
$\blacksquare$ We remark that Lemma 4 also applies to privileged basis architectures; the only argument that needs to be adjusted is the marginal argument, see footnote \footnote{All the arguments as presented for the qubit case go through unchanged for privileged basis architectures, except for the argument about marginals, which although very similar may require slight clarification. In privileged basis architectures the cylinder on a single particle will have extremal operators of the form $\sigma = |a\rangle \langle a|+ \Delta$ where $\Delta$ represents off-diagonal terms. Consider a controlled-diagonal unitary of the form $U = \sum |i\rangle \langle i| \otimes Z_i$, where $Z_i$ represents a diagonal single particle gate contingent on the value of $i$. We note that $U \sigma \otimes \sigma' U^{\dag} = \sum_{a,b} \bra{a} \sigma \ket{b} \ket{a} \bra{b} \otimes Z_a \sigma' Z^{\dag}_b $. Tracing out the first particle leaves the second particle in $Z_a \sigma' Z^{\dag}_a$ for one specific value of $a$ when $\sigma$ is an extremal point, hence the marginal state on the second particle is at most transformed by a diagonal single particle unitary. So we see that when extremal points are considered, $U$ interactions and then tracing out at most rotate the other qubits by diagonal unitaries. As a diagonal unitary on a given particle does not change its positivity with respect to the allowed measurements, we have that the positivity of a state implies the positivity of any marginal, so the analogue of $u_n$ for these systems will also be non-increasing.} for an explanation. Lemma 4 allows us to define sequences that help to capture when $r$ is classically simulatable efficiently through the coarse graining approach. Consider for instance constructing a sequence of blocks by starting with a single $2 \times 2$ block $B_1$ and then recursively constructing larger blocks by joining two copies of $B_{n-1}$ to make $B_n$. 
Define sequences \begin{eqnarray} u_n &:=& s(B_n) \\ l_n &:=& s_{\lambda}(B_n) \end{eqnarray} From Lemma 4 we have that $u_n \geq l_n$, $l_n$ is non-decreasing, and $u_n$ is non-increasing, and hence both sequences converge. Let us denote the limits as: \begin{eqnarray} u &:=& \lim u_n \\ l &:=& \lim l_n \end{eqnarray} A radius $r$ is classically simulatable efficiently if $r < l$ but if $r > u$ then it leads to negative probabilities, and so in the latter situation the problem of classically sampling from the output of the `cylindrical computer' is not well defined. It is natural to speculate that $u=l$. This would have an interesting foundational interpretation: if we had a hypothetical cylindrical computer made from cylindrical bits placed on the vertices of the lattice and undergoing $CZ$ interactions with their neighbours, then for $r < l$ the system would be efficiently simulatable classically, and for $r>l$ the system would not give valid probabilities. A similar interpretation would hold for any other privileged basis architecture and lattice if they have $u=l$. While we have not been able to establish whether or not $u=l$ for any system, for a square 2D lattice with $CZ$ interactions we have numerically computed lower bounds on $l_1$ (using a polyhedral outer approximation of the input cylinders and doing a brute force search), and upper bounds to $u$ using trial measurements and inputs on rectangles of size $6 \times 7$. These numbers indicate that for a 2D square lattice $0.0698 \leq l \leq u \leq 0.139$ (but tentative further investigations in fact suggest that $0.0913 \leq l \leq u \leq 0.128$). Hence even if $u \neq l$, they are not far apart (see figure \ref{bullseye}). These numerical investigations can certainly be taken further. We leave this for another occasion. 
However, we will report one initial finding: for the square 2D lattice, numerical experiments seem to suggest the following conjecture: both the upper sequence $u_n$ and the lower sequence $l_n$ are determined by considering measurement projectors on each particle of the form $(I-X)/2$, and input extrema of the form $(I+ \alpha X +Z)/2$ (i.e. with no $Y$ component) where $\alpha$ includes the contribution from $r$ and any growth factors $\lambda$ applied in the coarse graining process. The maximum $r$ for which these inputs and measurements give positive probability appears to be the maximum in equations (\ref{optimum}). \begin{figure} \caption{We define `cylindrical computers' to be hypothetical devices made by placing `cylindrical bit' operators $(I+r\cos(\theta)X+r\sin(\theta)Y \pm Z)/2$, taken from the top or bottom of a cylinder of radius $r$, at the vertices of a lattice, interacting neighbours with $CZ$ gates, and then measuring in the $Z$ basis or in the $XY$ plane. The diagram represents the input operators (of either $z$ value) as points $(r,\theta)$ in polar coordinates. In the case of a square 2D lattice, we know that inputs from the dark grey central region $r < 0.0698$ can be efficiently simulated classically by the coarse graining approach, whereas inputs from the outer lighter grey region $r > 0.139$ lead to negative probabilities (given a large enough lattice). We are certain that these bounds are not tight. The narrow white band represents the currently uncertain region - computing more terms in the sequences $u_n$ and $l_n$ will make this region narrower. If the limits $u$ and $l$ meet, then the white band would shrink to a circle at radius $l$. 
Diagrams of a similar nature can in principle (although the computations may be difficult) be produced for any privileged basis architecture with inputs from its cylinders.} \label{bullseye} \end{figure} \section{Obstacles to classical simulation} \label{SectionObstacles} In the foregoing sections we presented our classical simulation methods in the context of input state spaces that are cylinders or coarse grained versions of them, and hence contain many non-physical operators. However, as we are ultimately only interested in simulating systems with {\it quantum} inputs, we may wonder whether it is possible to change our state spaces (e.g. perhaps shaving off some extremal points) to obtain a greater range of quantum inputs that can be efficiently simulated classically. Or more generally, one might wonder whether there is some other classical simulation algorithm that can simulate more quantum inputs (e.g. with a higher radius, but contained within the Bloch sphere). In this section we discuss obstacles that such endeavours may face. The main ideas are already present in other works \cite{Terry,mora_universal_2010}; we merely modify them slightly to fit our setting, in which we have stronger restrictions on the available measurements, as we are not permitted to remeasure qubits. The key observation is that if we start with input qubits with high enough radius, then we can use the cluster circuits to steer unmeasured qubits to a $\ket{+}$ state conditioned on the measurement outcomes of permitted measurements \footnote{We thank Miguel Navascues and Richard Jozsa for suggesting this line of investigation to us.}. This means that one could, given qubits of a high enough radius on an appropriate lattice, probabilistically prepare an ideal cluster state on the unmeasured qubits. This could in turn give obstacles to classical simulation algorithms. 
We consider two ways: \begin{enumerate} \item Following arguments similar to those utilised in \cite{Terry,mora_universal_2010}, if the probability of success for creating a $\ket{+}$ on unmeasured qubits exceeds a threshold determined by lattice percolation thresholds on the unmeasured lattice, one can implement cluster state quantum computation. This means that for some graphs of finite degree, there is a radius beyond which BQP can be supported and classically efficient simulation is unlikely. \item One could rule out the existence of a separable decomposition under any coarse graining scheme using the fact that one can violate a Bell inequality using the permitted measurements. As separable decompositions automatically furnish a local hidden variable model \cite{Werner}, non-locality would rule out a separable decomposition based classical algorithm, even if not ruling out other classical algorithms. \end{enumerate} In appendix \ref{appendix} we present an example of the first of these approaches: on a lattice of degree $5$ (see figure \ref{fig:lattice}) with input {\it quantum} pure states drawn from within ${\rm Cyl}(r_{max})$ with $r_{max}=0.84$ one can create a perfect 2D cluster state efficiently on one subset of the qubits by measuring the other qubits. Hence it should not be possible to classically efficiently simulate quantum pure states with $r \geq 0.84$ on such a lattice. The nonlocality obstructions are essentially questions of localisable `non-locality', in a similar sense to the definition of localisable entanglement \cite{localisable}. Imagine that we are attempting to find a separable decomposition, with any state space or coarse graining method, such that two adjacent regions of qubits are in different blocks, across which we would like a separable decomposition. As the regions are adjacent there will be one qubit in one block connected to another qubit in another block by a $CZ$ gate. Pick two such qubits and call them $A$ and $B$. 
Mark out two chains of qubits, one from the first block terminating at $A$, and the other from the second block terminating at $B$. If we measure out all qubits in the $Z$ basis, except for ones on the two chains, assuming that the initial $r$ was high enough, one can use a protocol similar to that of appendix \ref{appendix} to create $\ket{+}$ states in $A$ and $B$. This will result in an EPR pair that can then violate a Bell inequality with our permitted measurements, and so no separable decomposition can be used as soon as the input radii are high enough for this purification to be possible. This means that when the initial radius $r$ is too high, there can be no suitable generalised separable decomposition, even with a different choice of state space. For small values of $r$ a similar process would localise {\it quantum} entanglement between the two qubits, but not non-locality for our restricted measurements. Further obstacles to increasing the set of inputs for which we can classically simulate might be obtainable from conjectures about the polynomial hierarchy. In these arguments \cite{harrow_quantum_2017}, one proves that if widely-believed complexity theoretic conjectures hold (i.e. the non-collapse of the polynomial hierarchy) then there cannot be any efficient classical simulation algorithm. For example, one can entertain the possibility of a multiplicative error simulation. To prove that our restricted model on a 2D lattice cannot be simulated, we could consider attaching linear chains of ancilla qubits to the qubits on the lattice. We would then want to show that by measuring the ancillas, and allowing for post-selection, we can prepare a state on the lattice that is a universal resource state for post-selected MBQC. Then by following the arguments used in \cite{bremner_classical_2010}, this suffices to show that the restricted 2D cluster state cannot be simulated up to multiplicative error. 
However, this notion of simulation is physically unrealistic, and we would like to rule out a classical simulation up to additive error. Stemming from work by \cite{bremner_average-case_2016,aaronson_complexity-theoretic_2016}, there has been further progress in ruling out additive error simulations for various restricted models of quantum computing (\cite{bremner_achieving_2017,miller_quantum_2017,gao_quantum_2017,yoganathan_quantum_2019,haferkamp_closing_2020}). For now however, we leave this for future work. Nevertheless, we note here that when post-selection is permitted, allowing all measurements (not just $Z$ or $XY$ plane measurements) brings additional power. Consider two qubits with a low radius $r$ undergoing a $CZ$ gate. The existence of a cylinder separable decomposition for the output shows that if we then post-select on the outcomes of $Z$ or $XY$ plane measurements on one qubit, we cannot steer the unmeasured qubit to a perfect $\ket{+}$ (as the other qubit must be taken to a state from inside a cylinder of radius $\lambda r$). However, if we are permitted to measure the first qubit arbitrarily, then (by standard considerations \cite{HJW}) with postselection one can obtain a perfect $\ket{+}$ on the second qubit. \section{Discussion} \label{SectionDiscussion} We have shown that computations made from cluster state circuits acting upon inputs close enough to computational basis states can be efficiently simulated classically. We obtain explicit bounds in the case of the qubit systems, but the framework applies to qudits and other types of (diagonal) interaction as well. Our classical simulations also lead to types of local hidden variable model, the second of which is non-standard, as the hidden variable model can communicate within blocks. The initial classical simulations furnish examples of highly entangled quantum systems that have a local hidden variable model through the cylinder separable decomposition. 
The second, coarse-grained simulations lead to a kind of local hidden variable model in which the locality constraint is relaxed for particles within the same block. Let us offer some remarks on how the approach given in this work could be applied in other situations. Key to our construction has been the idea that to maintain a separable decomposition one can grow the local state space to include non-physical operators. In fact, this is completely general: given any gate (diagonal or not) one can maintain a separable decomposition by `growing' the state space. That this is true is actually just a different take on standard ideas in entanglement theory. For instance, consider the fact that an entangled state can be turned into a quantum-separable one by local noise acting upon each particle. This means that acting upon the quantum-separable state with the inverse of the noise will give us a separable decomposition for the original entangled state, albeit one involving state spaces that have `grown' larger than physical quantum ones. The problem with non-physical separable decompositions like this is that they cannot be sampled from as they lead to negative probabilities. However, if we are only interested in certain measurements, we could hope that any negativities that arise will be controlled well enough to not be `seen' by our permitted measurements. This is exactly what we have done in this work through the use of cylindrical state spaces. The fact that this can lead to classically efficient simulations of non-trivial pure quantum-entangled systems is perhaps surprising, but it suggests that in other situations, perhaps other low-degree circuits with more general (non-diagonal) gates, the approach could be more effective than might be anticipated. An important consideration in any such investigations would be how quickly the state spaces must grow. 
In the case of two particle states, the construction of `small' state spaces that provide a separable decomposition with minimal `negativity' has been considered in \cite{AJRV2}, and connections to cross norm entanglement measures \cite{Oliver} provide a useful technical tool for attempting to minimise state space growth. The approach is closely related to the general theme of using quasi-probability distributions for classical simulation methods and local hidden variable models. It is hence reasonable to expect that recent works that have explored simulations involving small amounts of `magic' or `negativity' (see e.g. \cite{seddon2021quantifying,Pashayan2015WB}) could also be combined with the notion of generalised-separability to simulate systems with small amounts of generalised entanglement. In any given system it is possible that a different choice of state spaces could lead to classical simulation algorithms for other quantum inputs. For example, in the context of the cluster state variants that we have considered in this work, we do not directly care about simulating systems with non-physical cylindrical inputs, we only care about quantum inputs. So we might consider other state spaces of different shapes, with the aim of finding ones that grow slowest when undergoing interactions, but contain as many quantum input states as possible. To what extent might this be possible? A convex hull version of twirling \cite{Werner} can be used to argue that an optimal state space must respect the symmetry group, and as pointed out earlier there are strong connections to certain entanglement measures \cite{AJRV2}. However in future work it may be useful to develop better systematic methods of constructing good state spaces. Another possible question is whether coarse graining can increase the class of permitted measurements that can be efficiently classically simulated. 
In this work we only used coarse graining to increase the set of initial inputs that could be simulated. However, it is possible that coarse graining could instead increase the set of simulatable measurements. For example, if it were possible to write down a non-trivial family of entangled states that are separable with respect to block state spaces consisting of entanglement witnesses \cite{witness}, then these states would be good candidates for entangled systems that can be efficiently simulated classically for {\it any} single particle measurements. This is because the set of entanglement witnesses can be considered to be non-quantum state spaces that are (by definition) in the dual of local measurements. We do not know if such examples exist. Another viewpoint of the work is through its connections to the foundations of quantum theory. We can view our investigations as an exploration of the complexity of a kind of toy non-physical theory in which extremal cylinder operators (which are not quantum states) are placed at the nodes of a lattice, interacted with diagonal gates, and measured in a computational basis or in bases unbiased to it. In its present form this `theory' leads to negative probabilities for lattices of high enough degree, and so does not immediately make `operational sense' as a physical theory. However, models of computation incorporating quasi-probability considerations have been considered in the work of \cite{Lee}, and we currently believe that the cylindrical computers that we consider here could be considered operational `non-free' theories according to the definition of those authors. While our focus has been on using cylindrical computers with $r < l$ for constructing classical simulation algorithms for quantum systems, the framework of \cite{Lee} could shed light on the case when $r > u$. 
We also note that the systems we consider could lead to an interesting dynamical theory from a field theory perspective, where for a regular square lattice one direction could represent time (as is a standard interpretation in cluster state computation). Another open question is whether the limits $u$ and $l$ are identical for some lattices that are amenable to coarse graining. If it turns out to be the case that $u=l$ for such a system, then that would mean that apart from $r=l$ we would know the computational power almost entirely - for $r>l$ the system gives negative probabilities (and so the sampling problem is not well defined), but for $r<l$ the system is classically tractable. In the case of the 2D square lattice of qubits with $CZ$ interactions, we found that $u$ and $l$ are certainly close, and this difference can be made smaller still by computing more terms in the sequences. Although the focus of our work has been classical simulation, any attempts to optimise state spaces in generalised separable decompositions, or even classically simulate in any way, will eventually face obstacles coming from computational complexity conjectures. We have seen that following the approach of \cite{Terry,Dan}, if we take quantum mechanical inputs to cluster circuits, then the state of one particle may increase in radius conditioned on the outcome of measurements elsewhere, and indeed for some lattices we can steer qubits into $\ket{+}$ states. If our starting qubits have high enough radius, this can happen with sufficiently high probability that $BQP$ can be recovered using the percolation style arguments of \cite{Terry,Dan}. This ability to simulate $BQP$ through postselection could also in principle be the starting point of polynomial hierarchy obstacles \cite{TerhalD,harrow_quantum_2017,bremner_classical_2010,bremner_average-case_2016,aaronson_complexity-theoretic_2016} to classical simulation which could potentially apply for lower values of $r$ and lower degree. 
\newline \section{Stochastic representation} \label{AppA} To construct the stochastic representation let us initially assume we are only interested in input states with $z=1$, we will consider other values of $z$ shortly. Given an input radius $r$, define a standard `fiducial' extremal input with $z=1$, say $\rho_0(r) := (I + rX +Z)/2$ (it doesn't really matter which we pick). As any cylinder extremum with $z=1$ can be reached from a fiducial state by the action of a local $Z$ rotation, we can write the output separable decomposition in terms of fiducial states: \begin{equation} \label{fiducial} CZ(\rho_0(r) \otimes \rho_0(r)) = \sum_{i=1}^{K} p_i U_i \rho_0(\lambda r) U^{\dag}_i \otimes V_i \rho_0(\lambda r) V^{\dag}_i \end{equation} where $U_i$ and $V_i$ are local $Z$ rotations, and $p_i$ is a probability distribution. Now if we want to construct the action of the $CZ$ on two other input states $\sigma \otimes \omega$ (with $z=1$) we may simply express these new inputs in terms of the fiducial states, $\sigma \otimes \omega = S\rho_0 S^{\dag} \otimes W \rho_0 W^{\dag}$ for two $Z$ diagonal unitaries $S,W$, and apply the same separable decomposition because everything commutes with local $Z$ rotations: \begin{equation} CZ( \sigma \otimes \omega ) = \sum_{i=1}^{K} p_i U_i S \rho_0(\lambda r) S^{\dag} U^{\dag}_i \otimes V_i W \rho_0(\lambda r) W^{\dag} V^{\dag}_i \end{equation} This means that the action of the $CZ$, for inputs with $z=1$, can be represented by a radius growth by $\lambda$ and the ensemble of unitaries $\{p_i,U_i \otimes V_i\}$. We must now consider what happens if any of the inputs has $z=-1$. This can be accounted for in the separable decomposition (\ref{fiducial}) by changing all $z$ values to $-1$ for the inputs with $z=-1$, and applying a $Z$ rotation to the other particle. 
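The commutation step used above is easy to confirm directly. A minimal numerical sketch (the radius $r=0.3$ and the rotation angles are arbitrary choices): since $CZ$ is diagonal in the computational basis, it commutes with any local $Z$-diagonal unitaries $S \otimes W$, so conjugating the fiducial decomposition by $S \otimes W$ reproduces the action of the $CZ$ on the rotated inputs:

```python
import numpy as np

# CZ is diagonal in the computational basis, so it commutes with any
# local Z-diagonal unitaries S (x) W.  The radius r = 0.3 and the
# rotation angles are arbitrary choices.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def rz(t):
    """Local Z rotation (a Z-diagonal unitary)."""
    return np.diag([1.0, np.exp(1j * t)])

r = 0.3
rho0 = (I + r * X + Z) / 2            # fiducial extremal input, z = +1

S, W = rz(0.7), rz(-1.2)
SW = np.kron(S, W)

# CZ (S x W) == (S x W) CZ ...
assert np.allclose(CZ @ SW, SW @ CZ)

# ... so the CZ acting on rotated inputs equals the rotated output:
lhs = CZ @ (SW @ np.kron(rho0, rho0) @ SW.conj().T) @ CZ.conj().T
rhs = SW @ (CZ @ np.kron(rho0, rho0) @ CZ.conj().T) @ SW.conj().T
assert np.allclose(lhs, rhs)
```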
Hence for any inputs we may represent the $CZ$ by radius growth, the action of $\{p_i,U_i \otimes V_i\}$, and possibly extra $Z$ rotations where the input states have $z=-1$ (note that such additional $z$ dependence is unavoidable, because the $CZ$ gate can communicate information from one particle to the other, and an operation $\{p_i,U_i \otimes V_i\}$ with growth of $r$ by itself does not communicate from one particle to the other). \section{Purifying to $\ket{+}$ states within lattices} \label{appendix} In this appendix we show that a lattice of degree $5$ with input {\it quantum} pure states drawn from within ${\rm Cyl}(r_{max})$ with $r_{max}=0.84$ can be converted to a perfect 2D cluster state. Our approach uses arguments similar to those used in \cite{Terry}, in which a state on a lattice is prepared by applying CZ gates, on edges of the lattice, to a product state where each qubit is close to $\ket{+}$. Subsequently, a local 2-outcome measurement is applied to each qubit on the lattice, which either disentangles the qubit from the lattice or projects it into a $\ket{+}$ state. It is then known from \cite{Dan} that if the site occupation probability on the lattice is above a threshold $p_c$, then the resulting cluster state with holes is a universal resource for MBQC. Note that similar ideas were also used in \cite{mora_universal_2010}. In our model however, we do not permit remeasuring of qubits and we have further restrictions on the permitted measurements (i.e. we only use $Z$ basis measurements and $XY$ plane measurements), so we have to use a minor modification of previous arguments. Instead of describing the inputs in terms of $r$, we will now use a quantum pure state description, as the inputs are taken from the surface of the Bloch sphere. 
The initial product state on the $n\times m$ lattice is \begin{equation} \ket{\psi_{n\times m}} = \bigotimes_{i=1}^{N} \left(\cos(\phi_i/2)\ket{0} + \sin(\phi_i/2)\ket{1}\right), \end{equation} where the index $i$ denotes the qubit site, $N=n m $ is the total number of qubits on the lattice and $0 \leq \phi_i \leq \phi_{max}$. A CZ gate is then applied to each edge on the lattice. Note the correspondence between the radius and angle is given by $r=\left|\sin{\phi}\right|$. The fidelity with the usual perfect cluster state is then $\prod_{i}^{N} (\frac{1+\sin{\phi_i}}{2})$, and the perfect cluster state is recovered when $\phi_i = \pi/2$. If the input qubits are not $\ket{+}$ states, this is not an ideal cluster state. However, one can show that by attaching and measuring at most three ancilla qubits to each qubit (see figure \ref{fig:lattice}), we can probabilistically prepare $\ket{+}$ 2D cluster states suitable for quantum computation. The starting lattice (including the ancilla qubits) is hence the degree $5$ graph illustrated in figure \ref{fig:lattice}. To see how to perform universal quantum computation, consider the following sequence of operations. \begin{figure} \caption{This diagram illustrates how to prepare a single $\ket{+}$ state on the lattice via a linear chain. The linear chain, attached vertically to the 2D lattice, is built from ancilla qubits that are initialised with certain specified angles, which are then measured in the $X$-basis. In the method described in the text and in this appendix, one linear chain is attached to each qubit on the $n\times m$ lattice. } \label{fig:lattice} \end{figure} \begin{enumerate} \item Prepare two ancilla qubits $\ket{\phi_1} $ and $ \ket{\phi_2}$, where $\ket{\phi_j} = \cos(\phi_j/2)\ket{0} + \sin(\phi_j/2)\ket{1}$, $\phi_j \in (0,2\pi)$ and the index $j$ denotes the qubit. Additionally, for technical reasons we impose the condition that $\phi_1 + \phi_2 = \frac{\pi}{2}$. 
\item Apply a CZ gate between ancilla qubits 1 and 2 and measure the 1st qubit in the X-basis. If the outcome $x_1 = 1$ is obtained, which occurs with probability $p_{x_1}(1) = \frac{1}{2}(1 - \sin{\phi_1}\cos{\phi_2})$, the post-measurement state of qubit 2 is $$ \begin{aligned} \ket{\phi_2^\prime} = &\frac{\left[ \cos(\phi_1/2) - \sin(\phi_1/2)\right]\cos(\phi_2/2)}{\sqrt{1-\sin{\phi_1}\cos{\phi_2}}}\ket{0} \\ + &\frac{\left[ \cos(\phi_1/2) + \sin(\phi_1/2)\right]\sin(\phi_2/2)}{\sqrt{1-\sin{\phi_1}\cos{\phi_2}}}\ket{1}. \end{aligned} $$ Therefore, if we pick $\phi_1 + \phi_2 = \frac{\pi}{2}$, the post-measurement state $\ket{\phi_2^\prime}$ becomes a $\ket{+}$ state. \item If the outcome $x_1 = 0$ is obtained, which occurs with probability $p_{x_1}(0) = \frac{1}{2}(1 + \sin{\phi_1}\cos{\phi_2})$, the post-measurement state of qubit 2 is $$ \begin{aligned} \ket{\phi_2^\prime} = &\frac{\left[ \cos(\phi_1/2) + \sin(\phi_1/2)\right]\cos(\phi_2/2)}{\sqrt{1+\sin{\phi_1}\cos{\phi_2}}}\ket{0} \\ + &\frac{\left[ \cos(\phi_1/2) - \sin(\phi_1/2)\right]\sin(\phi_2/2)}{\sqrt{1+\sin{\phi_1}\cos{\phi_2}}}\ket{1}. \end{aligned} $$ That is, $\ket{\phi_2}$ has undergone a rotation about the $Y$ axis toward the $\ket{0}$ state. \end{enumerate} If the outcome $x_1 = 1$ is obtained, then we have successfully produced a $\ket{+}$ state which is placed on the lattice. If the wrong outcome $x_1 = 0$ is obtained, then we initialise another ancilla qubit $\ket{\phi_3}$, such that $\phi_{2}^\prime + \phi_3 = \frac{\pi}{2}$, where $\phi_{2}^\prime$ is the angle of the post-measurement state of qubit 2. We then proceed to repeat the above procedure. That is, we apply a CZ gate between qubits 2 and 3, and measure qubit 2 in the $X$ basis. Similarly, if the outcome $x_2 = 1$ is obtained, with probability $p_{x_2}(1) = \frac{1}{2}(1 - \sin{\phi^{\prime}_2}\cos{\phi_3})$, then the post-measurement state of qubit 3 is $\ket{+}$. 
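The repeat-until-success chain described above can be sketched numerically. The code below (standard library only) assumes the matching condition $\phi_{k+1} = \pi/2 - \phi_k'$ is enforced exactly at every step, including for the final qubit (the angles reported later in this appendix are rounded versions of these); with $\phi_1 = 0.18\pi$ and three measurements it gives a site occupation probability of about $0.73$:

```python
import math

# Sketch of the repeat-until-success ancilla chain.  We assume the
# matching condition phi_{k+1} = pi/2 - phi_k' holds exactly at every
# step; each X-basis measurement either leaves the next qubit in |+>
# (success) or rotates it towards |0> (failure), as in the text.
def chain_success_prob(phi1, n_measurements):
    phi = phi1                 # current (post-measurement) angle
    p_site, p_fail = 0.0, 1.0
    for _ in range(n_measurements):
        # success probability (1 - sin(phi)cos(phi_next))/2 with
        # cos(phi_next) = sin(phi):
        p_succ = (1 - math.sin(phi) ** 2) / 2
        p_site += p_fail * p_succ
        p_fail *= 1 - p_succ
        # failure branch: the next qubit, prepared at pi/2 - phi,
        # is rotated towards |0>:
        c, s = math.cos(phi / 2), math.sin(phi / 2)
        phi = 2 * math.atan((c - s) / (c + s)
                            * math.tan((math.pi / 2 - phi) / 2))
    return p_site

p_site = chain_success_prob(0.18 * math.pi, 3)
assert abs(p_site - 0.73) < 0.005   # value reported in this appendix
assert p_site > 0.5927              # above the percolation threshold
```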
If outcome $x_2 = 0$ is obtained, which occurs with probability $p_{x_2}(0) = \frac{1}{2}(1 + \sin{\phi^{\prime}_2}\cos{\phi_3})$, then the post-measurement state of qubit 3 is $\ket{\phi^{\prime}_3}.$ By repeating this method, we can calculate the probability that the lattice site will be occupied by a $\ket{+}$ state. For example, repeating the method for three ancilla qubits, the probability is \begin{equation} p_{site} = p_{x_1}( 1 ) + p_{x_1}(0)\left[p_{x_2}( 1 ) + p_{x_2}( 0 ) p_{x_3}( 1 )\right] . \end{equation} In the case that a $\ket{+}$ has not been successfully prepared on the lattice by the ancilla chain, we measure the final qubit in the Z basis. This projects the qubit into the $\ket{0}$ (or $\ket{1}$) state, which corresponds to creating a hole on the lattice, i.e. we have removed a vertex and edges from the cluster state. The relevant site-percolation threshold is $p_c = 0.5927\ldots$: if $p_{site} > p_c$, then by \cite{Dan} we can construct an efficient LOCC algorithm that creates a perfect cluster state from a 2D cluster state with holes. We find that by attaching and measuring three ancilla qubits, with angles $\phi_1 = 0.18\pi$, $\phi_2 = 0.32\pi$ and $\phi_3 = 0.31\pi$, we can prepare a $\ket{+}$ state on the lattice with probability $0.73$, which is above the percolation threshold $p_c$. The maximum angle required, $\phi_2= 0.32\pi$, corresponds to $r_{max} = 0.84$. Therefore, we can prepare a $\ket{+}$ state with probability above the percolation threshold $p_c$ with three ancillas that are drawn from within ${\rm Cyl}(r_{max})$, where $r_{max} = 0.84$. \end{document}
\begin{document} \date{} \title{ The Specular Derivative} \begin{abstract} In this paper, we introduce a new generalized derivative, which we term the specular derivative. We establish the Quasi-Rolle's Theorem, the Quasi-Mean Value Theorem, and the Fundamental Theorem of Calculus in light of the specular derivative. We also investigate various analytic and geometric properties of specular derivatives and apply these properties to several differential equations. \end{abstract} {\bf Key words}: generalization of derivatives, Fundamental Theorem of Calculus, Quasi-Mean Value Theorem, tangent hyperplanes, differential equations {\bf AMS Subject Classifications}: 26A24, 26A27, 26B12, 34A36 \section{Introduction} A derivative is a fundamental tool to measure the change of real-valued functions. The application of derivatives has been investigated in diverse fields beyond mathematics. At the same time, the generalization of derivatives has been studied in the fields of mathematical analysis, complex analysis, algebra, and geometry. The reason for generalizing derivatives is that the conditions required for differentiability, such as continuity, smoothness, or measurability, are demanding. In this sense, we devote this paper to devising a way to generalize the derivative in accordance with our intuition and knowledge. Let $f$ be a single-variable function defined on an interval $I$ in $\mathbb{R}$ and let $x$ be a point in $I$. In order to avoid confusion, we call $f'$ a \emph{classical derivative} in this paper. When we speak of generalizing a derivative, the precise meaning is to find an operator implied by the classical derivative, that is, one which exists and agrees with $f'$ whenever $f'$ exists. Also, the generalization of derivatives includes the relationship with Riemann or Lebesgue integration. There are many ways to achieve this task: symmetric derivatives, subderivatives, weak derivatives, Dini derivatives, and so on.
Extensive and well-organized surveys of the foregoing discussion can be found in \cite{1966_Bruckner} and \cite{1994_Bruckner_BOOK}. Subderivatives are motivated by the geometric properties of the tangent line associated with classical derivatives. The concept of subderivatives can be defined in abstract function spaces. Weak derivatives are motivated by the integration by parts formula and are related to functional analysis. We are interested in the application of generalized derivatives to partial differential equations and refer to Evans \cite{2010_Evans_BOOK} and Bressan \cite{2013_Bressan_BOOK}. As a point of reference for our study, we review symmetric derivatives, denoted by $f^{\ast}$, which are obtained by changing the form of the difference quotient. Since $f^{\ast}(x)$ does not depend on the behavior of $f$ at the point $x$, the symmetric derivative $f^{\ast}(x)$ can exist even if $f(x)$ does not exist. Note that the existence of $f^{\ast}(x)$ does not imply the existence of $f'(x)$. However, if $f'(x)$ exists almost everywhere, then $f^{\ast}(x)$ exists almost everywhere. If $f$ and $f^{\ast}$ are continuous on an open interval $I$, then there exists $x\in I$ such that $f'(x)=f^{\ast}(x)$. Symmetric derivatives do not satisfy the classical Rolle's Theorem and Mean Value Theorem. As replacements, the so-called Quasi-Rolle's Theorem and Quasi-Mean Value Theorem for continuous functions were proved by Aull \cite{1967_Aull}. Larson \cite{1983_Larson} proved that the continuity in the Quasi-Mean Value Theorem can be replaced by measurability. As for the Quasi-Mean Value Theorem for symmetric derivatives, \cite{1998_Sahoo_BOOK} and \cite{2011_Sahoo} are both accessible and extensive. Furthermore, according to Aull \cite{1967_Aull}, the Quasi-Mean Value Theorem implies that symmetric derivatives satisfy a property akin to a Lipschitz condition.
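As a quick numerical illustration of the symmetric derivative just discussed (this snippet is ours, not part of the cited literature), recall that $f^{\ast}(x)$ is the limit of the symmetric difference quotient $\left( f(x+h) - f(x-h) \right)/2h$. For $f(x) = |x|$ the quotient at $x = 0$ vanishes identically, so $f^{\ast}(0) = 0$ exists even though $f'(0)$ does not:

```python
import math

def symmetric_dq(f, x, h):
    """Symmetric difference quotient (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# |x| has no classical derivative at 0, yet the symmetric quotient
# there is identically 0 because |h| = |-h|.
for h in (1e-1, 1e-3, 1e-6):
    assert symmetric_dq(abs, 0.0, h) == 0.0

# Wherever f' exists, the symmetric derivative agrees with it:
# for f(x) = x**2 at x = 3, both equal 6.
assert abs(symmetric_dq(lambda x: x * x, 3.0, 1e-6) - 6.0) < 1e-5
```

The second assertion reflects the fact, noted above, that the symmetric derivative is a genuine generalization: it exists and agrees with $f'$ at every point of classical differentiability.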
In this paper, we devise a new generalized derivative, called the \emph{specular derivative}, which includes the classical derivative not only in one-dimensional space $\mathbb{R}$ but also in high-dimensional space $\mathbb{R}^{n}$. Also, we examine various analytic and geometric properties of specular derivatives and apply these properties in order to address several differential equations. To give an intuition, consider a function $f$ which is continuous at $x_0$ but not differentiable at $x_0$ in $I$ as in Figure \ref{Fig : Motivation for specular derivatives}. Imagine that you shine a light ray from left to right toward a certain mirror, so that the light ray changes direction at the point $x_0$. The light ray can be represented as two lines $\text{T}_1$ with the right-hand derivative $f'_+( x_0 )$ and $\text{T}_2$ with the left-hand derivative $f'_-( x_0 )$ that just touch the function $f$ at the point $x_0$. Finally, the mirror must be the line $\text{T}$. Moreover, the angle between $\text{T}_1$ and $\text{T}$ is equal to the angle between $\text{T}_2$ and $\text{T}$. We define the slope of the line $\text{T}$ as the specular derivative of $f$ at $x_0$, denoted by $f^{\spd}(x_0)$. The word "specular" in specular derivatives stands for the mirror $\text{T}$. \begin{figure} \caption{Motivation for specular derivatives} \label{Fig : Motivation for specular derivatives} \end{figure} Here are our main results. The specular derivative is well-defined in $\mathbb{R}^{n}$ for each $n \in \mathbb{N}$. In one-dimensional space $\mathbb{R}$, we suggest three ways to calculate a specular derivative and prove that specular derivatives obey the Quasi-Rolle's Theorem and the Quasi-Mean Value Theorem. Interestingly, second order specular differentiability implies first order classical differentiability. Most noteworthy is that the Fundamental Theorem of Calculus can be generalized in the specular derivative sense.
By defining a tangent hyperplane in light of specular derivatives, we extend the concepts of specular derivatives to high-dimensional space $\mathbb{R}^{n}$ and provide several examples. In particular, we reveal that the directional derivative with specular derivatives is related to the gradient with specular derivatives and has extrema. As for differential equations, we construct and address the first order ordinary differential equation and the partial differential equation, called the transport equation, with specular derivatives. The rest of the paper is organized as follows. In Section 2, we define a specular derivative in one-dimensional space $\mathbb{R}$ and state properties of the specular derivative. Section 3 extends the concepts of the specular derivative to high-dimensional space $\mathbb{R}^{n}$. Also, the gradient and directional derivatives for specular derivatives are provided in Section 3. Section 4 deals with differential equations with specular derivatives. Starting from the Fundamental Theorem of Calculus with specular derivatives, Section 4 constructs and solves the first order ordinary differential equation and the transport equation with specular derivatives. The appendix contains postponed proofs, useful but elementary properties, and notation comparing classical derivatives and specular derivatives. \section{Specular derivatives for single-variable functions} Here is our blueprint for specular derivatives in one-dimensional space $\mathbb{R}$. In Figure \ref{Fig : The blueprint for specular derivatives one-dimension}, a function $f$ is specularly differentiable in an open interval $(a, b) \subset \mathbb{R}$ even if $f$ is not defined at countably many points $\alpha_1$, $\alpha_2$, $\cdots$, $\alpha_n$ and is not differentiable at some points.
\begin{figure} \caption{The blueprint for specular derivatives in one-dimension} \label{Fig : The blueprint for specular derivatives one-dimension} \end{figure} \subsection{Definitions and properties} \begin{definition} Let $f:I \to \mathbb{R}$ be a single-variable function with an open interval $I \subset \mathbb{R}$ and $x_0$ be a point in $I$. Write \begin{equation*} f[x_0):=\lim_{x \searrow x_0}f(x) \qquad \text{and} \qquad f(x_0]:=\lim_{x \nearrow x_0}f(x) \end{equation*} if each limit exists. Also, we denote $f[ x_0 ]:=\frac{1}{2}\left(f[ x_0 ) + f( x_0 ] \right)$. \end{definition} \begin{definition} Let $f:I \to \mathbb{R}$ be a function with an open interval $I \subset \mathbb{R}$ and $x_0$ be a point in $I$. We say $f$ is \emph{right specularly differentiable} at $x_0$ if $x_0$ is a limit point of $I \cap [x_0, \infty)$ and the limit $$ \displaystyle f^{\spd}_{+}(x_0):= \lim_{x \searrow x_0}\frac{f(x)-f[x_0)}{x-x_0} $$ exists as a real number. Similarly, we say $f$ is \emph{left specularly differentiable} at $x_0$ if $x_0$ is a limit point of $I \cap (-\infty, x_0]$ and the limit $$ \displaystyle f^{\spd}_{-}(x_0):= \lim_{x \nearrow x_0}\frac{f(x)-f(x_0]}{x-x_0} $$ exists as a real number. Also, we call $f^{\spd}_{+}(x_0)$ and $f^{\spd}_{-}(x_0)$ the (\emph{first order}) \emph{right specular derivative} of $f$ at $x_0$ and the (\emph{first order}) \emph{left specular derivative} of $f$ at $x_0$, respectively. In particular, we say $f$ is \emph{semi-specularly differentiable} at $x_0$ if $f$ is right and left specularly differentiable at $x_0$. \end{definition} In Appendix \ref{Notation}, we suggest the notation for semi-specular derivatives and employ the notations in this paper. \begin{remark} Clearly, semi-differentiability implies semi-specular differentiability, while the converse does not hold.
For example, the sign function is neither right differentiable nor left differentiable at $0$, whereas one can prove that $D^{R}\operatorname{sgn}(0)=0=D^L \operatorname{sgn}(0)$. \end{remark} \begin{definition} Let $I$ be an open interval in $\mathbb{R}$ and $x_0$ be a point in $I$. Suppose a function $f : I \to \mathbb{R}$ is semi-specularly differentiable at $x_0$. We define the \emph{phototangent} of $f$ at $x_0$ to be the function $\operatorname{pht}f:\mathbb{R}\to \mathbb{R}$ by \begin{equation*} \operatorname{pht}f(y)= \begin{cases} f^{\spd}_{-}(x_0)(y-x_0)+ f(x_0] & \text{if } y<x_0,\\ f[ x_0 ] & \text{if } y=x_0,\\ f^{\spd}_{+}(x_0)(y-x_0)+ f[x_0) & \text{if } y>x_0. \end{cases} \end{equation*} \end{definition} \begin{definition} Let $f:I \to \mathbb{R}$ be a function, where $I$ is an open interval in $\mathbb{R}$. Let $x_0$ be a point in $I$. Suppose $f$ is semi-specularly differentiable at $x_0$ and let $\operatorname{pht}f$ be the phototangent of $f$ at $x_0$. Write $\mathbf{x}_0 = \left( x_0, f[ x_0 ] \right) \in I \times \mathbb{R}$. \begin{enumerate}[label=(\roman*)] \rm \item The function $f$ is said to be \emph{specularly differentiable} at $x_0$ if $\operatorname{pht}f$ and a circle $\partial B\left(\mathbf{x}_0, r\right)$ have two intersection points for all $r>0$. \rm \item Suppose $f$ is specularly differentiable at $x_0$ and fix $r>0$. The (\emph{first order}) \emph{specular derivative} of $f$ at $x_0$, denoted by $f^{\spd} ( x_0 )$, is defined to be the slope of the line passing through the two distinct intersection points of $\operatorname{pht}f$ and the circle $\partial B\left(\mathbf{x}_0, r\right)$. \end{enumerate} \end{definition} In particular, if $f$ is specularly differentiable on a closed interval $[a, b]$, then we define specular derivatives at end points: $f^{\spd}(a):=f^{\spd}_+(a)$ and $f^{\spd}(b):=f^{\spd}_-(b)$.
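The circle-and-chord definition above can be checked numerically. The sketch below (our illustration; it assumes the point of interest is translated to the origin with $f[ x_0 ] = 0$, so that the phototangent consists of two rays through the origin) computes the slope of the chord joining the two intersection points for the ReLU function $f(x) = \frac{1}{2}\left( x + |x| \right)$, whose right and left specular derivatives at $0$ are $1$ and $0$; the chord slope comes out as $\sqrt{2} - 1$, independent of the radius $r$, as the definition requires.

```python
import math

def chord_slope(left_slope, right_slope, r):
    """Slope of the chord through the two intersection points of the
    phototangent (two rays through the origin with the given slopes)
    and the circle of radius r centred at the origin."""
    # right branch: y = right_slope * x with x > 0
    ax = r / math.hypot(1.0, right_slope)
    ay = right_slope * ax
    # left branch: y = left_slope * x with x < 0
    bx = -r / math.hypot(1.0, left_slope)
    by = left_slope * bx
    return (ay - by) / (ax - bx)

# ReLU: right specular derivative 1, left specular derivative 0.
# The chord slope is independent of r and equals sqrt(2) - 1.
for r in (0.1, 1.0, 7.5):
    assert abs(chord_slope(0.0, 1.0, r) - (math.sqrt(2) - 1)) < 1e-12
```

The $r$-independence observed here is what makes the slope in part (ii) of the definition well-defined.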
We say $f$ is specularly differentiable on an interval $I$ in $\mathbb{R}$ if $f$ is specularly differentiable at $x_0$ for all $x_0\in I$. Note that specular derivatives are translation-invariant. Also, if $f$ is specularly differentiable on an interval $I \subset \mathbb{R}$, then the set of all points at which $f$ has a removable discontinuity is at most countable since $f(x]$ and $f[x)$ exist for all $x \in I$. \begin{proposition} \label{Prop : specularly differentiability iff pht continuity} Let $f: I \to \mathbb{R}$ be a function for an open interval $I \subset \mathbb{R}$ and $x_0$ be a point in $I$. Suppose there exists a phototangent, say $\operatorname{pht}f$, of $f$ at $x_0$. Then $f$ is specularly differentiable at $x_0$ if and only if $\operatorname{pht}f$ is continuous at $x_0$. \end{proposition} \begin{proof} Write $\mathbf{x}_0=(x_0,\operatorname{pht}f(x_0))$. Let $r>0$ be a real number. Write a circle $\partial B(\mathbf{x}_0, r)$ as the equation: \begin{equation} \label{Circle centered bold a with radius r} (x-x_0)^2 + (y-\operatorname{pht}f(x_0))^2=r^2 \end{equation} for $x, y\in \mathbb{R}$. The system of \eqref{Circle centered bold a with radius r} and $\left. \operatorname{pht}f\right|_{[x_0,\infty)}$ has a root $a$ as well as the system of \eqref{Circle centered bold a with radius r} and $\left. \operatorname{pht}f\right|_{(-\infty, x_0]}$ has a root $b$: \begin{equation} \label{x of the intersection between the ball and pht} a := x_0 + \frac{r}{\sqrt{\left(f^{\spd}_{+}(x_0)\right)^2 + 1}} \qquad \text{and} \qquad b :=x_0 - \frac{r}{\sqrt{\left(f^{\spd}_{-}(x_0)\right)^2 + 1}}, \end{equation} using the quadratic formula. Notice that $b < a$. To prove that $\operatorname{pht}f$ is continuous at $x_0$, take $\delta:= \min \left\{ a - x_0, x_0-b \right\}$. Then $\delta>0$.
If $z \in (x_0-\delta, x_0+\delta)$, then \begin{equation*} \left\vert \operatorname{pht}f(x_0) - \operatorname{pht}f(z)\right\vert =\left\vert f[ x_0 ] - \operatorname{pht}f(z)\right\vert \leq \begin{cases} \left\vert f[ x_0 ] - \operatorname{pht}f(b)\right\vert<r & \text{if } z\in (b, x_0],\\ \left\vert f[ x_0 ] - \operatorname{pht}f(a)\right\vert <r & \text{if } z \in [x_0, a), \end{cases} \end{equation*} which implies that $\operatorname{pht}f$ is continuous at $x_0$. Conversely, the system of \eqref{Circle centered bold a with radius r} and $\operatorname{pht}f$ has two distinct roots since $b<x_0<a$. Hence, we conclude that $f$ is specularly differentiable at $x_0$. \end{proof} \begin{corollary} \label{Crl : Linearity of phototangents} Let $f$ and $g$ be single-variable functions on an open interval $I \subset \mathbb{R}$ containing a point $x_0$. Suppose $f$ and $g$ are specularly differentiable at $x_0$. Then $f+g$ is specularly differentiable at $x_0$ and $\operatorname{pht}f + \operatorname{pht}g = \operatorname{pht}(f+g)$. \end{corollary} \begin{example} The phototangent of the sign function at $0$ is itself, which is not continuous at $0$. Hence, the sign function is not specularly differentiable at $0$. \end{example} \begin{example} \label{Ex : Not uniqueness specular derivatives concerning SODE} The function $f:\mathbb{R} \to \mathbb{R}$ by $f(x)=|x|$ for $x\in \mathbb{R}$ is continuous and specularly differentiable on $\mathbb{R}$. In fact, $f^{\spd}(x) = \operatorname{sgn}(x)$, which is the sign function. Let $g:\mathbb{R}\to \mathbb{R}$ be the function defined by $g(x)=|x|$ if $x\neq 0$ and $g(x)=1$ if $x=0$. Note that $g$ is not continuous at $0$ but is specularly differentiable at $0$ with $g^{\spd}(0)=0$. Consequently, we have $f^{\spd}(x) = g^{\spd}(x) = \operatorname{sgn}(x)$ for all $x \in \mathbb{R}$.
\end{example} \begin{definition} \label{Def : specular tangent line} Let $f : I \to \mathbb{R}$ be a function with an interval $I \subset \mathbb{R}$ and a point $x_0$ in $I$. Suppose $f$ is specularly differentiable at $x_0$. We define the \emph{specular tangent line} to the graph of $f$ at the point $\left( x_0, f[ x_0 ] \right)$, denoted by $\operatorname{stg}f$, to be the line passing through the point $\left( x_0, f[ x_0 ] \right)$ with slope $f^{\spd}( x_0 )$. \end{definition} \begin{remark} \label{rmk : properties of specular tanget line} In Definition \ref{Def : specular tangent line}, the specular tangent line is given by the function $\operatorname{stg}f : I \to \mathbb{R}$ by \begin{equation*} \operatorname{stg}f(x) = f^{\spd} ( x_0 )(x - x_0) + f[ x_0 ] \end{equation*} for $x\in I$. Also, the specular tangent has two properties: $f[ x_0 ] = \operatorname{stg}f( x_0 )$ and $f^{\spd}( x_0 ) = \left( \operatorname{stg}f \right)^{\spd} ( x_0 )$. \end{remark} In Figure \ref{Fig : basic concepts concerning specular derivatives}, the function $f$ is neither continuous at $x_0$ nor differentiable at $x_0$. Let $\operatorname{pht}f$ be the phototangent of $f$ at $x_0$. We can calculate the specular derivative whenever $\operatorname{pht} f$ is continuous at $x_0$. Imagine you shine a light ray toward a mirror. The words "specular" in specular tangent line and "photo" in phototangent stand for the mirror $\operatorname{stg}f$ and the light ray $\operatorname{pht}f$, respectively. Write $\text{C} = \left( x_0, f[ x_0 ] \right)$. Observing that \begin{equation*} \angle \text{CPQ} = \angle \text{CQP} = \angle \text{SCQ} = \angle \text{TCP}, \end{equation*} one can find that the slope of the line PQ and the slope of the specular tangent line of $f$ at $x_0$ are equal. \begin{figure} \caption{Basic concepts concerning specular derivatives} \label{Fig : basic concepts concerning specular derivatives} \end{figure} We suggest three avenues for calculating a specular derivative.
The first formula can be used as the criterion for the existence of specular derivatives. \begin{theorem} \label{Thm : specular derivatives criterion} \emph{(Specular Derivatives Criterion)} Let $f : I \to \mathbb{R}$ be a function with an open interval $I \subset \mathbb{R}$ and $x$ be a point in $I$. If $f$ is specularly differentiable at $x$, then \begin{equation*} f^{\spd}(x) = \lim_{h \to 0}\frac{\left( f(x + h) - f[x] \right) \sqrt{\left( f(x - h) - f[x] \right)^2 + h^2} -\left( f(x - h) - f[x] \right) \sqrt{\left( f(x + h) - f[x] \right)^2 + h^2}}{h \sqrt{\left( f(x - h) - f[x] \right)^2 + h^2} + h \sqrt{\left( f(x + h) - f[x] \right)^2 + h^2}}. \end{equation*} \end{theorem} \begin{proof} Set $\Gamma := \left\{ y - x : y \in I \right\}$ and define the function $g : \Gamma \to \mathbb{R}$ by $g(\gamma) = f(x + \gamma) - f[ x ]$ for $\gamma \in \Gamma$. Since specular derivatives are translation-invariant, we have $f^{\spd}( x ) = g^{\spd}\left( 0 \right)$. Hence, it suffices to prove that \begin{equation*} g^{\spd}(0) = \lim_{h \to 0} \frac{g(h) \sqrt{g(-h)^2 + h^2} - g(-h) \sqrt{g(h)^2 + h^2}}{h \sqrt{g(-h)^2 + h^2} + h \sqrt{g(h)^2 + h^2}}. \end{equation*} Let $h > 0$ be given. Fix $0 < r < h$. Since $g$ is specularly differentiable at zero, there exists a circle $\partial B(\text{O}, r)$ centered at the origin $\text{O}$ with radius $r$. Moreover, the circle $\partial B(0, r)$ has the two distinct points A and B intersecting the half-lines $\overrightarrow{\text{OC}}$ and $\overrightarrow{\text{OD}}$, respectively, where $\text{C} = \left( h, g(h) \right)$ and $\text{D} = \left( -h, g(-h) \right)$. See Figure \ref{Fig : The slope of the line AB converges the specular derivative of g at zero}. 
Observe the similar right triangles: \begin{equation*} \triangle \text{AFO} \sim \triangle \text{CEO} \qquad \text{and} \qquad \triangle \text{BGO} \sim \triangle \text{DHO}, \end{equation*} where points $\text{E} = \left( h , 0 \right)$, $\text{F} = \left( \text{A} \innerprd \mathbf{e}_1 , 0 \right)$, $\text{G} = \left( \text{B} \innerprd \mathbf{e}_1 , 0 \right)$, and $\text{H} = \left( -h , 0 \right)$ with $\mathbf{e}_1 = \left( 1, 0 \right)$. One can find that \begin{equation*} \text{F} = \left( \frac{rh}{\sqrt{g(h)^2 + h^2}} , 0 \right) \qquad \text{and} \qquad \text{G} = \left( \frac{-rh}{\sqrt{g(-h)^2 + h^2}} , 0 \right), \end{equation*} using basic geometry properties for similar right triangles. Consider the continuous function $q : \mathbb{R} \to \mathbb{R}$ defined by \begin{equation*} q( z ) = \begin{cases} \displaystyle -\frac{g(-h)}{h} z & \text{if } z < 0,\\[0.2cm] 0 & \text{if } z = 0,\\ \displaystyle \frac{g(h)}{h} z & \text{if } z > 0, \end{cases} \end{equation*} which passes through points $\text{D}$, $\text{B}$, $\text{O}$, $\text{A}$, and $\text{C}$. Using the function $q$, we find that \begin{equation*} \text{A} = \left(\frac{rh}{\sqrt{g(h)^2 + h^2}} , \frac{r g(h)}{\sqrt{g(h)^2 + h^2}} \right) \qquad \text{and} \qquad \text{B} = \left( \frac{-rh}{\sqrt{g(-h)^2 + h^2}} , \frac{rg(-h)}{\sqrt{g(-h)^2 + h^2}} \right). \end{equation*} Hence, the slope of the line $\text{AB}$ is \begin{equation*} \frac{g(h) \sqrt{g(-h)^2 + h^2} - g(-h) \sqrt{g(h)^2 + h^2}}{h \sqrt{g(-h)^2 + h^2} + h \sqrt{g(h)^2 + h^2}} = : \sigma(h). \end{equation*} Note that $\theta_1 = \angle \text{AOP}$ and $\theta_2 = \angle \text{BOQ}$ converge to zero as $h \to 0$, where P and Q are the intersection points of the circle $\partial B(\text{O}, r)$ and $\operatorname{pht}g$. The definition of the specular derivative yields that $\sigma(h)$ converges to $g^{\spd}(0)$ as $h \searrow 0$.
Since the function $\sigma$ is even, we deduce that $\sigma(h)$ converges to $g^{\spd}\left( 0 \right)$ as $h \to 0$, completing the proof. \end{proof} \begin{figure} \caption{The slope of the line AB converges to the specular derivative of $g$ at zero} \label{Fig : The slope of the line AB converges the specular derivative of g at zero} \end{figure} In the proof of Theorem \ref{Thm : specular derivatives criterion}, the observation that the function $\sigma$ is even can be generalized as follows: \vspace*{-0.5em} \begin{corollary} Let $f : I\to \mathbb{R}$ be a function with an open interval $I \subset \mathbb{R}$. Let $x_0$ be a point in $I$. Assume $f$ is specularly differentiable at $x_0$. If $f$ is symmetric about $x=x_0$ in a neighborhood of $x_0$, that is, if there exists $\delta > 0$ such that \begin{equation*} f(x_0 - x) = f(x_0 + x) \end{equation*} for all $x \in \left( -\delta, \delta \right)$, then $f^{\spd} ( x_0 ) = 0$. \end{corollary} \begin{example} For the ReLU function $f(x)=\frac{1}{2}\left(x+|x|\right)$, one can calculate $f^{\spd}(0)=-1+\sqrt{2}$. \end{example} In order to calculate specular derivatives more conveniently, we suggest the second formula, which uses semi-specular derivatives. \begin{proposition} \label{Prop : Calculating spd} Let $f:I\to \mathbb{R}$ be a function with an open interval $I \subset \mathbb{R}$ and $x_0$ be a point in $I$. Assume $f$ is specularly differentiable at $x_0$. Write $f^{\spd}_{+}(x_0)=:\alpha$ and $f^{\spd}_{-}(x_0)=:\beta$. Then, we have \begin{equation} \label{Prop : Calculating spd formula} f^{\spd} ( x_0 ) = \begin{cases} \displaystyle \frac{\alpha\beta-1 + \sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)}}{\alpha+\beta} & \text{if } \alpha+\beta\neq 0,\\[0.45cm] 0 & \text{if } \alpha+\beta=0. \end{cases} \end{equation} \end{proposition} \begin{proof} See Appendix \ref{Prop : Calculating spd proof}.
\end{proof} Since the formula in \eqref{Prop : Calculating spd formula} appears frequently in this paper, we collect some statements about it in Appendix \ref{Lem : the function A for calculation of spd}. In fact, the following corollary can be derived directly. \begin{corollary} \label{Crl : Calculating spd} If $f$ is specularly differentiable at $x_0 \in \mathbb{R}$, then the following statements hold: \begin{enumerate}[label=(\roman*)] \rm\item \emph{$f^{\spd}( x_0 ) = 0$ if and only if $f^{\spd}_{+}(x_0) + f^{\spd}_{-}(x_0) = 0$.} \rm\item \emph{Signs of $f^{\spd}( x_0 )$ and $f^{\spd}_{+}(x_0) + f^{\spd}_{-}(x_0)$ are equal, i.e., $\operatorname{sgn}\left( f^{\spd}( x_0 ) \right) = \operatorname{sgn}\left( f^{\spd}_{+}(x_0) + f^{\spd}_{-}(x_0) \right)$.} \end{enumerate} \end{corollary} \begin{proof} The application of Proposition \ref{Prop : Calculating spd} and \ref{Lem : the function A - 1}, \ref{Lem : the function A - 2} in Lemma \ref{Lem : the function A} with respect to $\alpha = f^{\spd}_{+}(x_0)$ and $\beta = f^{\spd}_{-}(x_0)$ completes the proof (see Appendix \ref{Lem : the function A for calculation of spd}). \end{proof} \begin{remark} \label{Rmk : Specular derivatives may do not have linearity} Specular derivatives are not linear in general. For instance, consider the ReLU function $f(x) =\frac{1}{2}\left( x + |x| \right)$ for $x \in \mathbb{R}$. First, we find that \begin{equation*} \left( 2 f \right)^{\spd}(0) = \frac{-1+\sqrt{5}}{2} \neq 2 \left( -1 + \sqrt{2} \right) = 2 f^{\spd}(0). \end{equation*} Also, take the smooth functions $g(x) = 2x$ and $h(x) = x$ for $x \in \mathbb{R}$. Second, one can calculate that \begin{equation*} f^{\spd}(0) + h^{\spd}(0) = \left( -1 + \sqrt{2} \right) + 1 \neq \frac{1 + \sqrt{10}}{3} = \left. \frac{d}{d^{S}x} \left( \frac{1}{2} \left( 3x + \left\vert x \right\vert \right) \right) \right|_{x=0} = (f + h)^{\spd}(0). \end{equation*} Furthermore, specular derivatives may fail to obey the Chain Rule.
Consider the composite function $f \circ g : \mathbb{R} \to \mathbb{R}$. Writing $y = g(x)$, we have $x = 0$ if and only if $y = 0$. Then we find that \begin{equation*} \left.\frac{df}{d^{S}x}\right|_{x=0} = \frac{-1 + \sqrt{5}}{2} \neq \left( -1 + \sqrt{2} \right) 2 = \left. \frac{d}{d^S y} \left( \frac{1}{2} \left( y + \left\vert y \right\vert \right) \right) \right|_{y = 0} \left. \frac{d}{d^{S}x} \left( 2x \right) \right|_{x = 0} = \left.\frac{df}{d^{S}y} \right|_{y = 0} \left. \frac{dy}{d^Sx} \right|_{x = 0}. \end{equation*} \end{remark} As stated in the next theorem, specular derivatives are a generalization of classical derivatives. \begin{theorem} \label{Thm : ordinary dervatives and specular derivatives} Let $f: I \to \mathbb{R}$ be a function with an open interval $I \subset \mathbb{R}$ and a point $x_0 \in I$. \begin{enumerate}[label=(\roman*)] \rm\item \emph{If $f$ is differentiable at $x_0$, then $f$ is specularly differentiable at $x_0$ and $f'(x_0)=f^{\spd} ( x_0 )$.} \label{Item : differentiability implies specularly differentiability} \rm\item \emph{$f$ is differentiable at $x_0$ if and only if $f$ is continuous at $x_0$ and the phototangent of $f$ at $x_0$ is differentiable at $x_0$.} \label{Item : differentiability iff pht differentiability} \end{enumerate} \end{theorem} \begin{proof} To prove \ref{Item : differentiability implies specularly differentiability}, assume $f$ is differentiable at $x_0$. Then $f^{\spd}_{+}(x_0)=f^{\spd}_{-}(x_0)< \infty$ and $f$ is continuous at $x_0$. It is obvious that the phototangent of $f$ at $x_0$ is continuous at $x_0$, which implies that $f$ is specularly differentiable at $x_0$. If $f^{\spd}_{+}(x_0)=f^{\spd}_{-}(x_0)=0$, we see that \begin{equation*} f'(x_0) = f'_+( x_0 ) = f^{\spd}_+( x_0 ) = 0 = f^{\spd}( x_0 ), \end{equation*} using Proposition \ref{Prop : Calculating spd}.
On the other hand, if $f^{\spd}_{+}(x_0) = f^{\spd}_{-}(x_0) \neq 0$, then $f^{\spd}_+ ( x_0 ) + f^{\spd}_- ( x_0 ) \neq 0$. Writing $\alpha := f^{\spd}_+ ( x_0 )$, one can calculate \begin{equation*} f'(x_0) = f'_+( x_0 ) = f^{\spd}_+( x_0 ) = \frac{\alpha^2 - 1 + \sqrt{\left( \alpha^2 + 1 \right)^2}}{2 \alpha} = f^{\spd}( x_0 ) \end{equation*} by applying Proposition \ref{Prop : Calculating spd} again. Now, let $\operatorname{pht}f$ be the phototangent of $f$ at $x_0$ to show $\ref{Item : differentiability iff pht differentiability}$. First, suppose $f$ is differentiable at $x_0$. Then $f$ is specularly differentiable at $x_0$ by \ref{Item : differentiability implies specularly differentiability}. Since $f^{\spd}_{+}(x_0)=f'_+(x_0)=f'_-(x_0)=f^{\spd}_{-}(x_0)$ and $f[ x_0 ]=f(x_0)$, we conclude that $\operatorname{pht}f$ is a polynomial of degree $1$ or less; in particular, $\operatorname{pht}f$ is differentiable at $x_0$. Next, assume $f$ is continuous at $x_0$ and $\operatorname{pht}f$ is differentiable at $x_0$. Observing that $\left(\operatorname{pht}f\right)'_+(x_0)=\left(\operatorname{pht}f\right)'_-(x_0)$, we have $f'_+(x_0)=f'_-(x_0)$, which implies that $f$ is differentiable at $x_0$. \end{proof} \subsection{Application} Specular derivatives satisfy neither the classical Rolle's Theorem nor the classical Mean Value Theorem. For the Mean Value Theorem, take the function $f:[-1, 1]\to \mathbb{R}$ defined by $f(x) = x + |x|$ as a counterexample: the mean slope is $\left( f(1) - f(-1) \right)/2 = 1$, whereas $f^{\spd}$ takes only the values $0$, $\left( \sqrt{5} - 1 \right)/2$, and $2$. \begin{lemma} \label{Lem : continuous and existence} Let $f$ be a continuous function on $[a, b] \subset \mathbb{R}$. Assume $f$ is specularly differentiable in $(a,b)$.
Then the following properties hold: \vspace*{-0.5em} \begin{enumerate}[label=(\roman*)] \rm\item \emph{If $f(a)<f(b)$, then there exists $c_1 \in (a,b)$ such that $f^{\spd}(c_1) \geq 0$.} \rm\item \emph{If $f(a)>f(b)$, then there exists $c_2 \in (a,b)$ such that $f^{\spd}(c_2) \leq 0$.} \label{Lem : continuous and existence (b)} \end{enumerate} \end{lemma} \begin{proof} First of all, assume $f(a)<f(b)$. Throughout the proof, $k$ denotes a real number with $f(a)<k<f(b)$. Since the set $\left\{ x\in [a,b] : f(x)>k \right\}=:K$ is nonempty and bounded below by $a$, the infimum $c_1 := \inf K$ exists and, by the continuity of $f$, satisfies $c_1 \neq a$ and $c_1 \neq b$. Note that $f(x) \leq k$ for all $x < c_1$, that $f(c_1) = k$ by the continuity of $f$, and that there exists a sequence $h_n \searrow 0$ such that $f(c_1 + h_n)>k$. We find that \begin{equation*} f^{\spd}_{+}(c_1) = f'_+(c_1) = \lim_{n \to \infty} \frac{f(c_1 + h_n) - f(c_1)}{h_n} \geq \lim_{n \to \infty} \frac{k - f(c_1)}{h_n} = 0 \end{equation*} and \begin{equation*} f^{\spd}_{-}(c_1) = f'_-(c_1) = \lim_{h \searrow 0} \frac{f(c_1)- f(c_1 - h)}{h} \geq \lim_{h \searrow 0} \frac{f(c_1)- k}{h} = 0. \end{equation*} On the one hand, assume $f^{\spd}_{+}(c_1) + f^{\spd}_{-}(c_1)=0$. Then $f^{\spd}(c_1)=0$ due to Proposition \ref{Prop : Calculating spd}. On the other hand, suppose $f^{\spd}_{+}(c_1) + f^{\spd}_{-}(c_1) \neq 0$. One can estimate that \begin{equation*} f^{\spd}(c_1) \geq \frac{f'_+(c_1)f'_-(c_1)}{f'_+(c_1)+f'_-(c_1)} \geq 0, \end{equation*} using Proposition \ref{Prop : Calculating spd}. Hence, we conclude that $f^{\spd}(c_1) \geq 0$. The reversed inequalities in \ref{Lem : continuous and existence (b)} can be shown similarly. \end{proof} \begin{theorem} \label{Thm : Quasi-Rolle's Theorem} \emph{(Quasi-Rolle's Theorem)} Let $f : [a, b] \to \mathbb{R}$ be a continuous function on $[a,b]$. Suppose $f$ is specularly differentiable in $(a,b)$ and $f(a) = f(b) = 0$.
Then there exist $c_1$, $c_2 \in (a,b)$ such that $f^{\spd}(c_2)\leq 0 \leq f^{\spd}(c_1)$. \end{theorem} \begin{proof} If $f \equiv 0$, the conclusion follows. Now, suppose $f \not\equiv 0$. The hypothesis yields three cases: there exists either $a^{\ast} \in (a,b)$ such that $f\left(a^{\ast}\right)>0$, or $b^{\ast} \in (a,b)$ such that $f \left(b^{\ast} \right) <0$, or both. If such $a^{\ast}$ exists, using Lemma \ref{Lem : continuous and existence} on $[a, a^{\ast}]$ and $[a^{\ast}, b]$ respectively, there exist $c_1 \in (a, a^{\ast})$ and $c_2 \in (a^{\ast}, b)$ such that $f^{\spd}(c_2) \leq 0 \leq f^{\spd}(c_1)$. The remaining cases can be shown in a similar way. \end{proof} Since specular derivatives are not linear, in order to prove the Quasi-Mean Value Theorem for specular derivatives we establish a strategy differing from the one used in the proof of the classical Mean Value Theorem or of the Quasi-Mean Value Theorem for symmetric derivatives in \cite{1967_Aull}. Before that, we suggest the third formula for calculating specular derivatives. \begin{lemma} \label{Lmm : average of angle} Let $f:I \to \mathbb{R}$ be a function, where $I$ is an open interval in $\mathbb{R}$. Let $x_0$ be a point in $I$. Suppose $f$ is specularly differentiable at $x_0$. Then \begin{equation*} \theta = \frac{\theta_1 + \theta_2}{2}, \end{equation*} where $f^{\spd}_+ ( x_0 )=\tan \theta_1$, $f^{\spd}_- ( x_0 )=\tan \theta_2$ and $f^{\spd} ( x_0 )=\tan \theta$ for $\theta_1$, $\theta_2$, $\theta \in \left( -\frac{\pi}{2}, \frac{\pi}{2} \right)$. \end{lemma} \begin{proof} See Appendix \ref{Lmm : average of angle proof}. \end{proof} \begin{theorem} \label{Thm : Quasi-Mean Value Theorem} \emph{(Quasi-Mean Value Theorem)} Let $f : [a, b] \to \mathbb{R}$ be a continuous function on $[a,b] \subset \mathbb{R}$. Assume $f$ is specularly differentiable in $(a, b)$.
Then there exist points $c_1$, $c_2 \in (a, b)$ such that \begin{equation*} f^{\spd}(c_2) \leq \frac{f(b)- f(a)}{b-a} \leq f^{\spd}(c_1). \end{equation*} \end{theorem} \begin{proof} Write $(f(b)-f(a))/(b-a)=:A$. We consider three cases: $f(a)=f(b)$, $f(a)>f(b)$ and $f(a)<f(b)$. First, suppose $f(a)=f(b)$. Let $\phi :[a,b] \to \mathbb{R}$ be a function defined by \begin{equation*} \phi(x) = f(x)- f(a) \end{equation*} for $x \in [a,b]$. Clearly, $\phi$ is continuous on $[a,b]$ and specularly differentiable in $(a,b)$. Observing that $\phi(a)=\phi(b)=0$ and $A=0$, we see that there exist points $c_1$, $c_2 \in (a, b)$ such that $\phi^{\spd}(c_2) \leq A \leq \phi^{\spd}(c_1)$ by Theorem \ref{Thm : Quasi-Rolle's Theorem}. Since $\phi^{\spd}(x) = f^{\spd}(x)$ for all $x \in (a, b)$, one can deduce that $f^{\spd}(c_2) \leq A \leq f^{\spd}(c_1)$. Next, assume $f(a) < f(b)$. Define the function $\psi :[a,b] \to \mathbb{R}$ by \begin{equation*} \psi (x) = A(x-a) + f(a) \end{equation*} and the set $\Psi :=\left\{ x \in [a,b] : f(x)> \psi(x)\right\}$. Then there exist $\inf \Psi =: c_1$ and $\sup \Psi =:c_2$. First, notice that $f(c_1 + h) > \psi(c_1 + h)$ and $f(c_1 - h) \leq \psi(c_1 - h)$ for sufficiently small $h >0$, as well as $f(c_1) \leq \psi(c_1)$; in fact, $f(c_1) = \psi(c_1)$ by the continuity of $f$. Write $f^{\spd}_+(c_1)=\tan \theta_1$, $f^{\spd}_-(c_1)=\tan \theta_2$ and $A=\tan \theta_0$, where $\theta_i \in \left( -\frac{\pi}{2}, \frac{\pi}{2} \right)$ for each $i=0$, $1$, $2$. Observe that \begin{equation*} \tan \theta_1=f^{\spd}_{+}(c_1) = \lim_{h \searrow 0} \frac{f(c_1 + h) - f(c_1)}{h} \geq \lim_{h \searrow 0} \frac{\psi(c_1+h) - \psi(c_1)}{h} = A = \tan \theta_0 \end{equation*} and \begin{equation*} \tan \theta_2=f^{\spd}_{-}(c_1) = \lim_{h \nearrow 0} \frac{f(c_1 + h) - f(c_1)}{h} \geq \lim_{h \nearrow 0} \frac{\psi(c_1 + h) - \psi(c_1)}{h} = \lim_{h \searrow 0} \frac{\psi(c_1) - \psi(c_1-h) }{h} = A = \tan \theta_0, \end{equation*} which, since the tangent function is increasing, implies that $\theta_1 + \theta_2 \geq 2\theta_0$.
Writing $f^{\spd}(c_1)=\tan \theta$ for some $\theta \in \left( -\frac{\pi}{2}, \frac{\pi}{2} \right)$, Lemma \ref{Lmm : average of angle} yields \begin{equation*} \theta = \frac{\theta_1 + \theta_2}{2} \geq \theta_0. \end{equation*} Since the tangent function is increasing, we conclude that $f^{\spd} (c_1) \geq A$. Second, as the same argument is valid with respect to $c_2$, one can find that $f^{\spd}(c_2) \leq A$. The remaining case $f(a) > f(b)$ can be proven similarly. \end{proof} Even if a continuous function $f$ is not itself bounded, $f$ satisfies a Lipschitz condition provided $f^{\spd}$ is bounded. \begin{corollary} Let $f : (a, b) \to \mathbb{R}$ be a continuous function on $(a,b)$. Assume $f^{\spd}$ is bounded on $(a,b)$. Let $x_1$, $x_2$ be points in $(a, b)$. Then there exists a constant $M > 0$ such that \begin{equation*} \left| f(x_1)-f(x_2) \right| \leq M |x_1-x_2|, \end{equation*} where $M$ is independent of $x_1$ and $x_2$. \end{corollary} \begin{proof} Since $f^{\spd}$ is bounded, there exists a constant $M > 0$ such that $\left|f^{\spd} (x)\right| \leq M$ for all $x \in (a,b)$. By Theorem \ref{Thm : Quasi-Mean Value Theorem}, we have \begin{equation*} -M \leq \frac{f(x_1)-f(x_2)}{x_1-x_2} \leq M \end{equation*} for any points $x_1$, $x_2\in (a,b)$ with $x_1 \neq x_2$, as required. \end{proof} Applying the Quasi-Mean Value Theorem for specular derivatives, one can find that the continuity of $f^{\spd}$ at a point $x_0$ and the continuity of $f$ on a neighborhood of the point $x_0$ entail the existence of $f'(x_0)$. To achieve this, we first prove the following weaker proposition. \begin{proposition} \label{Prop: continuity of specular derivatives weak version} Let $f:(a, b) \to \mathbb{R}$ be a function. Assume $f$ is specularly differentiable in $(a,b)$. Suppose $f$ and $f^{\spd}$ are continuous on $(a,b)$. Then for each point $x \in (a,b)$, $f'(x)$ exists and $f'(x) = f^{\spd}(x)$. \end{proposition} \begin{proof} Let $x$ be a point in $(a,b)$.
Choose $h > 0$ to be sufficiently small so that $(x-h, x+h) \subset (a,b)$. Applying Theorem \ref{Thm : Quasi-Mean Value Theorem} to $f$ on $[x,x+h]$, there exist points $c_1$, $c_2$ in $(x, x+h)$ such that \begin{equation*} f^{\spd} ( c_2 ) \leq \frac{f(x+h) - f(x)}{h} \leq f^{\spd} ( c_1 ). \end{equation*} Thanks to the Intermediate Value Theorem for the continuous function $f^{\spd}$, there exists a point $c_3 \in (x, x+h)$ such that \begin{equation*} f^{\spd}(c_3) = \frac{f(x+h) - f(x)}{h}. \end{equation*} The same argument applies on $[x-h, x]$. Taking the limit of both sides as $h \to 0$, we see that \begin{equation*} f^{\spd}(x) = \lim_{h \to 0} f^{\spd} ( c_3 ) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = f'( x ), \end{equation*} as required. \end{proof} Here we state a stronger theorem than the above proposition. \begin{theorem} \label{Thm : continuity of specular derivatives} Let $x_0$ be a point in $\mathbb{R}$. Let $f: \mathbb{R} \to \mathbb{R}$ be a function that is specularly differentiable at $x_0$. Suppose $f$ is continuous in a neighborhood of $x_0$ and $f^{\spd}$ is continuous at $x_0$. Then $f'(x_0)$ exists and $f'(x_0) = f^{\spd} ( x_0 )$. \end{theorem} \begin{proof} Let $\varepsilon >0$ be given. Using the continuity of $f^{\spd}$ at $x_0$, choose $B_{\delta}(x_0)$ to be a neighborhood of $x_0$ with $\delta >0$ such that $f$ is continuous at $x$ and \begin{equation} \label{Thm : continuity of specular derivatives - 1} f^{\spd} ( x_0 ) - \varepsilon < f^{\spd}(x) < f^{\spd} ( x_0 ) + \varepsilon \end{equation} whenever $x \in B_{\delta}( x_0 )$. Choose $h > 0$ to be sufficiently small so that $(x_0-h, x_0+h) \subset B_{\delta}(x_0)$. Applying Theorem \ref{Thm : Quasi-Mean Value Theorem} to $f$ on $[x_0,x_0+h]$, there exist points $c_1$, $c_2$ in $(x_0, x_0+h)$ such that \begin{equation*} f^{\spd} ( c_2 ) \leq \frac{f(x_0+h) - f(x_0)}{h} \leq f^{\spd} ( c_1 ).
\end{equation*} Since $c_1$ and $c_2$ are in $B_{\delta}(x_0)$, we finally obtain that \begin{equation*} f^{\spd} ( x_0 ) - \varepsilon < \frac{f(x_0+h) - f(x_0)}{h} < f^{\spd} ( x_0 ) + \varepsilon \end{equation*} from \eqref{Thm : continuity of specular derivatives - 1}; the same estimate for $h < 0$ follows analogously, as required. \end{proof} \subsection{Higher order specular derivatives} Naturally, one can extend specular derivatives to higher orders, as with classical derivatives. Let $f : I \to \mathbb{R}$ be a function, where $I \subset \mathbb{R}$ is an open interval containing a point $x_0$. Writing $f^{[1]} := f^{\spd}$, for each positive integer $n \geq 2$, we recursively define the $n$-\emph{th order specular derivative} of $f$ at $x_0$ as \begin{equation*} f^{[n]}(x_0):= \left(f^{[n-1]}\right)^{\spd}(x_0) \end{equation*} if these specular derivatives exist. We also suggest notation for higher order specular derivatives in Appendix \ref{Notation}. In particular, we write the \emph{second order specular derivative} of $f$ at $x_0$ by \begin{equation*} f^{\spd \spd}(x_0) := \left( f^{\spd} \right)^{\spd} (x_0). \end{equation*} The bottom line is that second order specular differentiability of a continuous function implies classical differentiability. \begin{proposition} \label{Prop : double specular derivatives} Let $f : I \to \mathbb{R}$ be a function with an open interval $I \subset \mathbb{R}$. If $f$ is continuous on $I$ and there exists $f^{\spd \spd}(x)$ for all $x \in I$, then $f^{\spd}$ is continuous on $I$. \end{proposition} \begin{proof} Let $x_0$ be a point in $I$. We claim that $f^{\spd}$ is continuous at $x_0$. Write $\displaystyle \lim_{x \to x_0} f^{\spd}(x) =: \alpha$. Let $\varepsilon > 0$ be given. Then there exists $\delta > 0$ such that \begin{equation} \label{Prop : double specular derivatives proof - 1} \left\vert f^{\spd} (x)- \alpha \right\vert < \varepsilon \end{equation} whenever $0 < \left\vert x - x_0 \right\vert < \delta$.
From Lemma \ref{Lmm : average of angle}, we know that either \begin{equation*} f^{\spd}_-( x_0 ) \leq f^{\spd}( x_0 ) \leq f^{\spd}_+( x_0 ) \qquad \text{or} \qquad f^{\spd}_+ ( x_0 ) \leq f^{\spd}( x_0 ) \leq f^{\spd}_- ( x_0 ), \end{equation*} using the fact that the tangent function is increasing. Without loss of generality, assume \begin{equation} \label{Prop : double specular derivatives proof - 2} f^{\spd}_-( x_0 ) \leq f^{\spd}( x_0 ) \leq f^{\spd}_+( x_0 ). \end{equation} The definitions of $f^{\spd}_+ ( x_0 )$ and $f^{\spd}_- ( x_0 )$ imply \begin{equation} \label{Prop : double specular derivatives proof - 3} f^{\spd}_+ ( x_0 ) \leq \frac{f(x_1) - f(x_0)}{x_1 - x_0} + \varepsilon \qquad \text{and} \qquad \frac{f(x_2) - f(x_0)}{x_2 - x_0} - \varepsilon \leq f^{\spd}_- ( x_0 ) \end{equation} for some $x_1 \in ( x_0, x_0 + \delta )$ and $x_2 \in \left( x_0 - \delta, x_0 \right)$, respectively. Applying Theorem \ref{Thm : Quasi-Mean Value Theorem} twice, to $f$ on $\left[ x_0, x_1 \right]$ and on $\left[ x_2, x_0 \right]$, there exist $x_1^{\ast} \in \left( x_0, x_1 \right)$ and $x_2^{\ast} \in \left( x_2, x_0 \right)$ such that \begin{equation} \label{Prop : double specular derivatives proof - 4} \frac{f(x_1) - f(x_0)}{x_1 - x_0} \leq f^{\spd}( x_1^{\ast} ) \qquad \text{and} \qquad f^{\spd}( x_2^{\ast} ) \leq \frac{f(x_2) - f(x_0)}{x_2 - x_0}. \end{equation} Combining the inequalities in \eqref{Prop : double specular derivatives proof - 2}, \eqref{Prop : double specular derivatives proof - 3}, and \eqref{Prop : double specular derivatives proof - 4}, we obtain \begin{equation} \label{Prop : double specular derivatives proof - 5} f^{\spd}( x_2^{\ast} ) - \varepsilon \leq f^{\spd}( x_0 ) \leq f^{\spd}( x_1^{\ast} ) + \varepsilon.
\end{equation} Since $x_0 - \delta < x_2^{\ast} < x_0 < x_1^{\ast} < x_0 + \delta$, we find that \begin{equation*} f^{\spd}( x_1^{\ast} ) < \alpha + \varepsilon \qquad \text{and} \qquad \alpha - \varepsilon < f ^{\spd}( x_2^{\ast} ) \end{equation*} from \eqref{Prop : double specular derivatives proof - 1}. Combining these with \eqref{Prop : double specular derivatives proof - 5} yields \begin{equation*} \alpha - 2 \varepsilon < f ^{\spd} ( x_0 ) <\alpha + 2 \varepsilon. \end{equation*} Since $\varepsilon > 0$ was arbitrary, we have \begin{equation*} f^{\spd}( x_0 ) = \alpha = \lim_{x \to x_0} f^{\spd} ( x ). \end{equation*} Consequently, we conclude that $f ^{\spd}$ is continuous at $x_0$. \end{proof} \begin{theorem} Let $f : I \to \mathbb{R}$ be a function with an open interval $I \subset \mathbb{R}$. Suppose $f$ is continuous on $I$ and there exists $f^{\spd \spd}(x)$ for all $x \in I$. Then $f'(x)$ exists and $f'(x) = f^{\spd}(x)$ whenever $x \in I$. \end{theorem} \begin{proof} Proposition \ref{Prop: continuity of specular derivatives weak version} and Proposition \ref{Prop : double specular derivatives} yield the conclusion of this theorem. \end{proof} What we are interested in next is how to define specular derivatives in higher dimensions and what properties they have. We discuss this topic in the next section. \section{Specular derivatives for multi-variable functions} In stating specular derivatives and their properties in the high-dimensional space $\mathbb{R}^{n}$, we mainly refer to \cite{2012_Colley_BOOK}. \subsection{Definitions and properties} \begin{definition} \label{Def : high-dimensions limits} Let $f:U \to \mathbb{R}$ be a multi-variable function with an open set $U \subset \mathbb{R}^n$. Let $\mathbf{x} =( x_{1}, x_2, \cdots, x_{n} )$ denote a point of $\mathbb{R}^{n}$. Let $\mathbf{a} = ( a_1, a_2, \cdots, a_n )$ be a point in $U$.
For $1 \leq i \leq n$, we define \begin{equation*} f[\mathbf{a})_{(i)} := f[a_1, a_2, \cdots, a_n)_{(i)} := \lim_{h \searrow 0}f(\mathbf{a}+h\mathbf{e}_i) \qquad \text{and} \qquad f(\mathbf{a}]_{(i)} := f(a_1, a_2, \cdots, a_n]_{(i)} := \lim_{h \nearrow 0}f(\mathbf{a}+h\mathbf{e}_i) \end{equation*} if each limit exists, where $\mathbf{e}_{i}$ is the $i$-th standard basis vector of $\mathbb{R}^{n}$. We also denote $f[\mathbf{a}]_{(i)} := \frac{1}{2}\left(f[\mathbf{a})_{(i)} + f(\mathbf{a}]_{(i)}\right)$ and \begin{equation*} \overline{\mathbf{a}}_{(i)} :=\left( \mathbf{a}, f [\mathbf{a}]_{(i)} \right) := \left(a_1, a_2, \cdots, a_n, f [\mathbf{a}]_{(i)} \right), \end{equation*} where $1 \leq i \leq n$. In particular, if $f [\mathbf{a}]_{(1)} = f [\mathbf{a}]_{(2)} = \cdots = f [\mathbf{a}]_{(n)}$, we write the common value as $f [\mathbf{a}]$. \end{definition} \begin{definition} \label{Def : high-dimensions specularly partial derivatives} Let $U$ be an open subset of $\mathbb{R}^{n}$ and $f: U \rightarrow \mathbb{R}$ be a function. Let $\mathbf{x} =( x_{1}, x_2, \cdots, x_{n} )$ denote a point of $\mathbb{R}^{n}$. Let $\mathbf{a}=\left(a_{1},a_2, \cdots, a_{n}\right) $ be a point in $U$. For $1\leq i \leq n$, we define the (\emph{first order}) \emph{right specularly partial derivative} of $f$ at $\mathbf{a}$ with respect to the variable $x_i$ to be the limit $$ \displaystyle \partial^{R}_{x_i}f(\mathbf{a}):= \lim_{h \searrow 0}\frac{f(\mathbf{a} + h \mathbf{e}_i)-f[\mathbf{a})_{(i)}}{h} $$ provided it exists as a real number. Similarly, we define the (\emph{first order}) \emph{left specularly partial derivative} of $f$ at $\mathbf{a}$ with respect to the variable $x_i$ to be the limit $$ \displaystyle \partial^{L}_{x_i} f(\mathbf{a}):= \lim_{h \nearrow 0}\frac{f(\mathbf{a} + h \mathbf{e}_i)-f(\mathbf{a}]_{(i)}}{h} $$ provided it exists as a real number.
In particular, we say $f$ is (\emph{first order}) \emph{semi-specularly partial differentiable} at $\mathbf{a}$ with respect to the variable $x_i$ if there exist both $\partial^{R}_{x_i} f(\mathbf{a})$ and $\partial^{L}_{x_i} f(\mathbf{a})$. \end{definition} We suggest the notation of semi-specularly partial derivatives in Appendix \ref{Notation}. Furthermore, in one dimension $\mathbb{R}$, we use the abused notation $\partial^{R}_{x}f(\mathbf{a})= f^{\spd}_{+}(a)$ and $\partial^{L}_{x}f(\mathbf{a})= f^{\spd}_{-}(a)$, where $\mathbf{a} := \left( a \right) :=a \in \mathbb{R}$. In this context, Definitions \ref{Def : high-dimensions limits} and \ref{Def : high-dimensions specularly partial derivatives} make sense as extensions of semi-specular derivatives from one dimension to higher dimensions. \begin{example} Consider the function $f:\mathbb{R}^{2} \to \mathbb{R}$ defined by \begin{equation*} f(x, y) = \begin{cases} \left\vert x + y \right\vert & \text{if } x + y \neq 0,\\ -1 & \text{if } x + y = 0, \end{cases} \end{equation*} for $(x, y) \in \mathbb{R}^{2}$. Define the set $W:= \left\{ \left( w_1, w_2 \right) \in \mathbb{R}^{2} : w_1 + w_2 = 0 \right\}$. Let $\mathbf{w}=(w_1, w_2)$ be a point in $W$. Writing $x = x_1$ and $y = x_2$, one can compute \begin{equation*} f[\mathbf{w})_{(1)} = \lim_{h \searrow 0} |w_1 + h + w_2| = 0 = \lim_{h \nearrow 0} |w_1 + h + w_2| = f(\mathbf{w}]_{(1)} \end{equation*} so that \begin{equation*} \partial^{R}_{x} f(\mathbf{w}) = \lim_{h \searrow 0} \frac{f(w_1+h,w_2)-f[w_1, w_2)_{(1)}}{h} = \lim_{h \searrow 0} \frac{|w_1 + h + w_2|}{h}= \lim_{h \searrow 0} \frac{| h |}{h}=1 \end{equation*} and \begin{equation*} \partial^{L}_{x} f(\mathbf{w}) = \lim_{h \nearrow 0} \frac{f(w_1+h,w_2)-f(w_1, w_2]_{(1)}}{h} = \lim_{h \nearrow 0} \frac{|w_1 + h + w_2|}{h}= \lim_{h \nearrow 0} \frac{| h |}{h}=-1. \end{equation*} \end{example} To define specular derivatives in higher dimensions, we first need to define phototangents in higher dimensions.
We define the $n$-dimensional version of phototangents naturally, so that the properties of specular derivatives in one dimension still apply. \begin{definition} Suppose that $U$ is an open subset of $\mathbb{R}^{n}$ and $f: U \rightarrow \mathbb{R}$ is a function. Let $\mathbf{x} =( x_{1}, x_2, \cdots, x_{n} )$ denote a point of $\mathbb{R}^{n}$ and let $\mathbf{a}=\left(a_{1},a_2, \cdots, a_{n}\right) $ be a point in $U$. For $1 \leq i \leq n$, write $\mathbf{e}_{i}$ for the $i$-th standard basis vector of $\mathbb{R}^{n}$. \begin{enumerate}[label=(\roman*)] \rm\item For $1 \leq i \leq n$, we define the \emph{section} of the domain $U$ of the function $f$ by the point $\mathbf{a}$ with respect to the variable $x_i$ to be the set \begin{equation*} U_{x_i}( \mathbf{a} ) := \left\{ \mathbf{x} \in U : \mathbf{x} \innerprd \mathbf{e}_j = \mathbf{a} \innerprd \mathbf{e}_j \text{ for all } 1 \leq j \leq n \text{ with } j \neq i \right\}. \end{equation*} \rm\item For $1 \leq i \leq n$, assume $f$ is semi-specularly partial differentiable at $\mathbf{a}$ with respect to the variable $x_i$.
We define a \emph{phototangent} of $f$ at $\mathbf{a}$ with respect to the variable $x_i$ to be the function $\operatorname{pht}_{x_i}f : \mathbb{R}^n_{x_i}( \mathbf{a} ) \to \mathbb{R}$ defined by \begin{equation*} \operatorname{pht}_{x_{i}}f(\mathbf{y})= \begin{cases} \partial^{L}_{x_i}f(\mathbf{a})\left(\mathbf{y} \innerprd \mathbf{e}_i -\mathbf{a} \innerprd \mathbf{e}_i \right)+ f(\mathbf{a}]_{(i)} & \text{if } \mathbf{y} \innerprd \mathbf{e}_i < \mathbf{a} \innerprd \mathbf{e}_i,\\ f[\mathbf{a}]_{(i)} & \text{if } \mathbf{y} \innerprd \mathbf{e}_i = \mathbf{a} \innerprd \mathbf{e}_i,\\ \partial^{R}_{x_i}f( \mathbf{a} )\left( \mathbf{y} \innerprd \mathbf{e}_i -\mathbf{a} \innerprd \mathbf{e}_i \right)+ f[\mathbf{a})_{(i)} & \text{if } \mathbf{y} \innerprd \mathbf{e}_i > \mathbf{a} \innerprd \mathbf{e}_i, \end{cases} \end{equation*} for $\mathbf{y} \in \mathbb{R}^n_{x_i}( \mathbf{a} )$. \end{enumerate} \end{definition} In the three-dimensional case, for instance, consider a function $f:U\to \mathbb{R}$ with an open set $U \subset \mathbb{R}^{2}$ and the variables $x=x_1$, $y=x_2$ as in Figure \ref{Basic concepts concerning specularly partial derivatives}. If $f$ is semi-specularly partial differentiable at $\mathbf{a}$ with respect to $x$ and $y$, the figure illustrates the sections of the domain by $\mathbf{a}$ and the phototangents of $f$ at $\mathbf{a}$ with respect to $x$ and $y$. \begin{figure} \caption{Basic concepts concerning specularly partial derivatives} \label{Basic concepts concerning specularly partial derivatives} \end{figure} \begin{definition} \label{Def : partial specularly derivatives in high-dimensions} Let $f:U \to \mathbb{R}$ be a function, where $U$ is an open subset of $\mathbb{R}^n$. Let $\mathbf{x}=( x_1, x_2, \cdots, x_n )$ denote a typical point of $\mathbb{R}^{n}$. Let $\mathbf{a}$ be a point in $U$.
For $1 \leq i \leq n$, suppose $f$ is semi-specularly partial differentiable at $\mathbf{a}$ with respect to the variable $x_i$ and let $\operatorname{pht}_{x_i}f$ be the phototangent of $f$ at $\mathbf{a}$ with respect to the variable $x_i$. We define as follows: \begin{enumerate}[label=(\roman*)] \rm \item The function $f$ is said to be \emph{specularly partial differentiable} at $\mathbf{a}$ with respect to the variable $x_i$ if $\operatorname{pht}_{x_i}f$ and a sphere $\partial B\left(\overline{\mathbf{a}}_{(i)}, r\right)$ have two intersection points for all $r>0$. \rm \item Suppose $f$ is specularly partial differentiable at $\mathbf{a}$ with respect to the variable $x_i$ and fix $r>0$. The (\emph{first order}) \emph{specular partial derivative} of $f$ at $\mathbf{a}$ with respect to the variable $x_i$, denoted by $\partial_{x_i}^S f(\mathbf{a})$, is defined to be the slope of the line passing through the two distinct intersection points of $\operatorname{pht}_{x_i}f$ and the sphere $\partial B\left(\overline{\mathbf{a}}_{(i)}, r\right)$. \end{enumerate} \end{definition} In Appendix \ref{Notation} we suggest the notation for specular partial derivatives. \begin{remark} If $f$ is specularly partial differentiable at $\mathbf{a}$ with respect to the variable $x_i$, Theorem \ref{Thm : specular derivatives criterion} justifies the following extension: \begin{equation} \label{Rmk : specularly partial derivatives formula} \partial^{S}_{x_i} f ( \mathbf{a} ) = \lim_{h \to 0}\frac{g (h) \sqrt{\left( g (-h) \right)^2 + h^2} - g (-h) \sqrt{\left( g (h) \right)^2 + h^2}}{h \sqrt{\left( g (-h) \right)^2 + h^2} + h \sqrt{\left( g (h) \right)^2 + h^2}}, \end{equation} where $g (h) = f\left( \mathbf{a} + h \mathbf{e}_i \right) - f[\mathbf{a}]_{(i)}$. \end{remark} From now on, we generalize the tangent plane in light of specular derivatives. Recall that an $n$-dimensional hyperplane is determined by at least $n+1$ points.
We later define certain tangents in the specular derivative sense in higher dimensions by using these hyperplanes. \begin{definition} \label{Def : specularly differentiability for multi-variables} Let $f:U\to \mathbb{R}$ be a multi-variable function with an open set $U \subset \mathbb{R}^{n}$ and let $\mathbf{a}$ be a point in $U$. Let $\mathbf{x}=( x_1, x_2, \cdots, x_n )$ denote a typical point of $\mathbb{R}^{n}$. \begin{enumerate}[label=(\roman*)] \rm\item We write $\mathcal{V}( f, \mathbf{a} )$ for the set containing all indices $i$ of variables $x_i$ such that $f$ is specularly partial differentiable at $\mathbf{a}$ with respect to $x_i$ for $1 \leq i \leq n$. \rm\item Let $\mathcal{P} (f, \mathbf{a})$ denote the set containing all intersection points of the phototangent of $f$ at $\mathbf{a}$ with respect to $x_i$ and a sphere $\partial B\left(\overline{\mathbf{a}}_{(i)}, 1\right)$ for each $i \in \mathcal{V}( f, \mathbf{a} )$. \label{Def : specularly differentiability for multi-variables (b)} \rm\item If $\left\vert \mathcal{P} (f, \mathbf{a}) \right\vert \geq n+1$ and $f[\mathbf{a}]_{(i)} = f[\mathbf{a}]_{(j)}$ for all $i$, $j\in \mathcal{V}( f, \mathbf{a} )$, we say that $f$ is \emph{weakly specularly differentiable at} $\mathbf{a}$. \rm\item If $\left\vert \mathcal{P} (f, \mathbf{a}) \right\vert = 2n$ and $f[\mathbf{a}]_{(1)} = f[\mathbf{a}]_{(2)} = \cdots = f[\mathbf{a}]_{(n)}$, we say that $f$ is (\emph{strongly}) \emph{specularly differentiable at} $\mathbf{a}$. \end{enumerate} \end{definition} For a point $\mathbf{a} = ( a_1, a_2, \cdots, a_n )$, we will write the sets $\mathcal{V} ( f, \mathbf{a} )$ and $\mathcal{P} (f, \mathbf{a})$ simply as \begin{equation*} \mathcal{V}( \mathbf{a} ) := \mathcal{V}( a_1, a_2, \cdots, a_n ) \qquad \text{and} \qquad \mathcal{P}( \mathbf{a} ) := \mathcal{P}( a_1, a_2, \cdots, a_n ) \end{equation*} when no confusion can arise.
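For instance, the function $f(x, y) = |x| + |y|$ for $(x, y) \in \mathbb{R}^{2}$, with the variables $x = x_1$ and $y = x_2$, gives a simple illustration of Definition \ref{Def : specularly differentiability for multi-variables}; the following computation is a direct consequence of the definitions above. Since $f$ is continuous, $f[0, 0]_{(1)} = f[0, 0]_{(2)} = 0$, and one can compute $\partial^{R}_{x} f(0, 0) = 1 = \partial^{R}_{y} f(0, 0)$ and $\partial^{L}_{x} f(0, 0) = -1 = \partial^{L}_{y} f(0, 0)$, so that each phototangent is the graph of $t \mapsto |t|$ in the corresponding section. Hence $\mathcal{V}(0, 0) = \left\{ 1, 2 \right\}$, $\partial^{S}_{x} f(0, 0) = 0 = \partial^{S}_{y} f(0, 0)$, and \begin{equation*} \mathcal{P}(0, 0) = \left\{ \left( \frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}} \right), \left( -\frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}} \right), \left( 0, \frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right), \left( 0, -\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}} \right) \right\}. \end{equation*} Since $\left\vert \mathcal{P}(0, 0) \right\vert = 4 = 2n$ and $f[0, 0]_{(1)} = f[0, 0]_{(2)}$, the function $f$ is strongly specularly differentiable at $(0, 0)$.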
Note that $0 \leq \left\vert \mathcal{P}( \mathbf{a} ) \right\vert = 2 \left\vert \mathcal{V}( \mathbf{a} ) \right\vert \leq 2n$. In particular, if $n=2$, weak specular differentiability coincides with strong specular differentiability, while this may fail for $n \geq 3$. \begin{example} \label{Ex : not specularly differentiable but specularly partial differentiable} Consider the function $f : \mathbb{R}^{2} \setminus \left\{ (0, 0) \right\} \to \mathbb{R}$ defined by \begin{equation*} f(x, y) = \frac{x^2}{x^2 + y^2} \end{equation*} for $(x, y) \in \mathbb{R}^{2} \setminus \left\{ (0, 0) \right\}$. Then one can readily calculate that $f[0, 0]_{(1)} = 1$ and $f[0,0]_{(2)} = 0$. Also, since \begin{equation*} \partial^{R}_{x} f(0, 0) = \partial^{L}_{x} f(0, 0) = 0 = \partial^{L}_{y} f(0, 0) = \partial^{R}_{y} f(0, 0), \end{equation*} the phototangents of $f$ at $(0, 0)$ with respect to $x$ and $y$ are the functions $\operatorname{pht}_x f : \mathbb{R}^2_{x}(0, 0) \to \mathbb{R}$ and $\operatorname{pht}_y f : \mathbb{R}^2_{y}(0, 0) \to \mathbb{R}$ defined by \begin{equation*} \operatorname{pht}_x f( \mathbf{y}_1 ) = 1 \qquad \text{and} \qquad \operatorname{pht}_y f( \mathbf{y}_2 ) = 0 \end{equation*} for $\mathbf{y}_1 \in \mathbb{R}^2_{x}(0, 0)$ and $\mathbf{y}_2 \in \mathbb{R}^2_{y}(0, 0)$, respectively. Note that $\operatorname{pht}_y f$ is just the $y$-axis. Hence, $f$ is specularly partial differentiable at $(0, 0)$ with respect to $x$ and $y$, which means that $\mathcal{V}(0, 0) = \left\{ 1, 2 \right\}$. The definition of specular partial derivatives implies that \begin{equation*} \partial^{S}_x f(0,0) = 0 = \partial^S_y f(0,0). \end{equation*} Observe that \begin{equation*} \mathcal{P} (0, 0) = \left\{ \left( 0, 0, 1 \right), \left( 0, 1, 0 \right), \left( 0, -1, 0 \right) \right\}.
\end{equation*} However, $f$ is neither weakly specularly differentiable nor strongly specularly differentiable at $(0, 0)$ since \begin{equation*} f[0, 0]_{(1)} = 1 \neq 0 = f[0, 0]_{(2)} \end{equation*} even if $\left\vert \mathcal{P} (0, 0) \right\vert = 3$. \end{example} \begin{example} \label{Ex : not partial differentiable but specularly partial differentiable} We again employ the variables $x = x_1$ and $y = x_2$. Consider the function $f : \mathbb{R}^{2} \to \mathbb{R}$ defined by \begin{equation*} f(x, y) = \begin{cases} \displaystyle \frac{xy}{\sqrt{x^2 + y^2}} & \text{if } (x, y) \neq (0, 0) ,\\[0.45cm] 1 & \text{if } (x, y) = (0, 0), \end{cases} \end{equation*} as in Figure \ref{Fig : The function specularly partial differentiable but not partial differentiable}. Note that $f$ is not differentiable at $(0, 0)$. Calculating that \begin{equation*} f[0, 0)_{(1)} = f(0, 0]_{(1)} = 0 = f(0, 0]_{(2)} = f[0, 0)_{(2)}, \end{equation*} one can compute that \begin{equation*} \partial^{R}_{x} f (0, 0) = \partial^{L}_{x} f (0, 0) = 0 = \partial^{L}_{y} f (0, 0) = \partial^{R}_{y} f (0, 0). \end{equation*} Then, the phototangents of $f$ at $(0, 0)$ with respect to $x$ and $y$ are the $x$-axis and the $y$-axis, respectively. Thus, $f$ is specularly partial differentiable at $(0, 0)$ with respect to $x$ and $y$, which implies that $\mathcal{V}(0, 0) = \left\{ 1, 2 \right\}$. Also, one can find that \begin{equation*} \mathcal{P} (0, 0) = \left\{ (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0) \right\}. \end{equation*} Now, we can calculate the specular partial derivatives: \begin{equation*} \partial_{x}^S f (0, 0) = 0 = \partial_{y}^S f (0, 0). \end{equation*} Lastly, since $\left\vert \mathcal{P} (0, 0) \right\vert = 4 = 2n$ and $f[0, 0]_{(1)} = 0 = f[0, 0]_{(2)}$, we conclude that $f$ is specularly differentiable at $(0, 0)$.
\begin{figure} \caption{The function specularly partial differentiable but not partial differentiable} \label{Fig : The function specularly partial differentiable but not partial differentiable} \end{figure} \end{example} Now, we generalize the concept of a tangent hyperplane for classical derivatives. If $f$ is differentiable at $\mathbf{a}$, many authors define a tangent plane ``at $\left( \mathbf{a}, f(\mathbf{a}) \right)$''. To accommodate this definition, we need to devise notation for the point $\left( \mathbf{a}, f(\mathbf{a}) \right)$ in light of specular derivatives and to justify such notation. We start by reinterpreting weak specular differentiability in light of equivalence relations. Let $f:U \to \mathbb{R}$ be a function with an open set $U \subset \mathbb{R}^{n}$ and let $\mathbf{a}$ be a point in $U$. Let $i$, $j$ be indices of variables in $\mathcal{V}( f, \mathbf{a} )$. Define two indices $i$ and $j$ to be equivalent, written $i \sim j$, if $f[\mathbf{a}]_{(i)} = f[\mathbf{a}]_{(j)}$. Now, observe that $f$ is weakly specularly differentiable at $\mathbf{a}$ if and only if there exists $\text{w} \in \mathcal{V}( f, \mathbf{a} )$ such that \begin{equation*} \left\vert \left[ \text{w} \right]_{\sim} \right\vert \geq \frac{n+1}{2}, \end{equation*} where $\left[ \text{w} \right]_{\sim}$ denotes the equivalence class of the index $\text{w}$. Furthermore, $f$ is strongly specularly differentiable at $\mathbf{a}$ if and only if there exists $\text{s} \in \mathcal{V}( f, \mathbf{a} )$ such that \begin{equation*} \left\vert \left[ \text{s} \right]_{\sim} \right\vert =n. \end{equation*} Here, the following statement not only justifies our new notation but also ensures the uniqueness of the point at which $f$ has a tangent hyperplane in the specular derivative sense (details in Corollary \ref{Crl : The uniqueness of the point at which a function has a wstg}).
\begin{proposition} \label{Prop : the index of wstg is unique} If $f$ is weakly specularly differentiable at $\mathbf{a}$, one can choose $\emph{w} \in \mathcal{V}( f, \mathbf{a} )$ such that \begin{equation} \label{Prop : the index of wstg is unique; formula} \left\vert \left[ \emph{w} \right]_{\sim} \right\vert \geq \frac{n+1}{2} \qquad \text{and} \qquad \left\vert \left[ i \right]_{\sim} \right\vert < \frac{n+1}{2} \end{equation} whenever $i \in \mathcal{V}( f, \mathbf{a} ) \setminus \left[ \emph{w} \right]_{\sim}$. \end{proposition} \begin{proof} Since $f$ is weakly specularly differentiable at $\mathbf{a}$, one can choose $\text{w} \in \mathcal{V}( f, \mathbf{a} )$ with $\left\vert \left[ \text{w} \right]_{\sim} \right\vert \geq \frac{n+1}{2}$. Let $i \in \mathcal{V}( f, \mathbf{a} )$ be an index such that $i \not\in \left[ \text{w} \right]_{\sim}$. Suppose to the contrary that \begin{equation*} \left\vert \left[ i \right]_{\sim} \right\vert \geq \frac{n+1}{2}. \end{equation*} Then, since the equivalence classes $\left[ \text{w} \right]_{\sim}$ and $\left[ i \right]_{\sim}$ are disjoint subsets of $\mathcal{V}( f, \mathbf{a} )$, we find that \begin{equation*} n \geq \left\vert \left[ \text{w} \right]_{\sim} \right\vert + \left\vert \left[ i \right]_{\sim} \right\vert \geq n + 1, \end{equation*} which is a contradiction. This completes the proof. \end{proof} Now, if $f$ is weakly specularly differentiable at $\mathbf{a}$, one can write the point \begin{equation*} \overline{\mathbf{a}}_{({\rm w})} := \left( \mathbf{a}, f[\mathbf{a}]_{({\rm w})} \right), \end{equation*} where ${\rm w} \in \mathcal{V}( f, \mathbf{a} )$ satisfies \eqref{Prop : the index of wstg is unique; formula}. In particular, if $f$ is strongly specularly differentiable, we omit the subscript $(\text{w})$, i.e., \begin{equation*} \overline{\mathbf{a}} := \left( \mathbf{a}, f[\mathbf{a}] \right). \end{equation*} Incidentally, it has to be mentioned that dealing with two functions can lead to confusion about the notations $\overline{\mathbf{a}}_{(\text{w})}$ and $\overline{\mathbf{a}}$, so we will not use these notations in such a case. \begin{definition} \label{Def : specular tangent hyperplane} Let $f:U\to \mathbb{R}$ be a multi-variable function with an open set $U \subset \mathbb{R}^{n}$ and let $\mathbf{a}$ be a point in $U$.
\begin{enumerate}[label=(\roman*)] \rm \item If $f$ is weakly specularly differentiable at $\mathbf{a}$, we define the \emph{weak specular tangent hyperplane} to the graph of $f$ at the point $\overline{\mathbf{a}}_{(\text{w})}$, written by $\operatorname{wstg}f$, to be the hyperplane which passes through the point $\overline{\mathbf{a}}_{(\text{w})}$ and is parallel to the hyperplane determined by points in $\mathcal{P} (f, \mathbf{a})$. \rm \item If $f$ has only a single weak specular tangent hyperplane to the graph of $f$ at $\overline{\mathbf{a}}_{(\text{w})}$, we call this hyperplane the (\emph{strong}) \emph{specular tangent hyperplane} to the graph of $f$ at the point $\overline{\mathbf{a}}_{(\text{w})}$ and write $\operatorname{stg}f$. \end{enumerate} \end{definition} Here is the aforementioned uniqueness of the point at which a weakly specularly differentiable function has a weak specular tangent hyperplane. \begin{corollary} \label{Crl : The uniqueness of the point at which a function has a wstg} If a function $f:U\to \mathbb{R}$ with an open set $U \subset \mathbb{R}^{n}$ is weakly specularly differentiable at a point $\mathbf{a}$ in $U$, the point at which $f$ has a weak specular tangent hyperplane is unique. \end{corollary} \begin{proof} Assume $f$ has a weak specular tangent hyperplane $\operatorname{wstg}f$ at $\overline{\mathbf{a}}_{({\rm w})}$. Suppose to the contrary that there exists a point $\overline{\mathbf{a}}_{({\rm v})}$ at which $f$ has a weak specular tangent hyperplane $\operatorname{wstg}_{{\rm v}} f$ with $\overline{\mathbf{a}}_{({\rm v})} \neq \overline{\mathbf{a}}_{({\rm w})}$. However, the existence of $\operatorname{wstg}_{{\rm v}} f$ contradicts Proposition \ref{Prop : the index of wstg is unique}, as required. \end{proof} \begin{remark} If $f$ is strongly specularly differentiable at $\mathbf{a}$, there are up to $_{2n}C_{n+1}$ weak specular tangent hyperplanes, where $_{m}C_{k}$ is the number of combinations of $k$ elements chosen from $m$ elements.
\end{remark} Notice that the choice of the radius of the sphere $\partial B\left(\overline{\mathbf{a}}_{(i)}, 1\right)$ in \ref{Def : specularly differentiability for multi-variables (b)} of Definition \ref{Def : specularly differentiability for multi-variables} is independent of weak specular tangent hyperplanes. One can take the radius to be an arbitrary positive real number, if necessary, in dealing with weak specular tangent hyperplanes, but we prefer the fixed value $1$ for convenience. As for (iii) and (iv) in Definition \ref{Def : specularly differentiability for multi-variables}, the condition that $f[\mathbf{a}]_{(i)} = f[\mathbf{a}]_{(j)}$ for all $i$, $j\in \mathcal{V}( f, \mathbf{a} )$ is imposed so that weak specular tangent hyperplanes are well defined. For example, if we drop this condition, the weak specular tangent hyperplane at $\left(0, 0, f[ 0, 0 ]\right)$ in Example \ref{Ex : not specularly differentiable but specularly partial differentiable} has to be the $yz$-plane, but such a tangent plane is not acceptable. \begin{remark} Based on Definition \ref{Def : specular tangent hyperplane}, the following is the $n$-dimensional version of Remark \ref{rmk : properties of specular tanget line}. If $f$ is strongly specularly differentiable at $\mathbf{a}$ and has a single weak specular tangent hyperplane, the strong specular tangent hyperplane is given by the function $\operatorname{stg}f : U \to \mathbb{R}$ defined by \begin{equation} \label{Rmk : strong specular tangent hyperplane} \operatorname{stg}f( \mathbf{x} ) = \sum_{i = 1}^n \partial^S_{x_i} f( \mathbf{a} ) \left( x_i - a_i \right) + f[\mathbf{a}] \end{equation} for $\mathbf{x} \in U$, where $\mathbf{a} = ( a_1, a_2, \cdots, a_n )$. Moreover, the specular tangent hyperplane has two properties: $f[\mathbf{a}] = \operatorname{stg}f(\mathbf{a})$ and $\partial^S_{x_i}f(\mathbf{a}) = \partial^S_{x_i} \operatorname{stg} f (\mathbf{a})$ for each $1 \leq i \leq n$.
\end{remark} The following functions are not classically differentiable at the origin but have a strong specular tangent hyperplane there. \begin{example} Consider the functions $f_1 (x, y) = \left||x|-|y|\right| + |x| + |y|$, $f_2 (x, y) = \left||x|-|y|\right|- |x| + |y|$, and $f_3 (x, y) = \left||x|-|y|\right| - |x| - |y|$ from $\mathbb{R}^{2}$ into $\mathbb{R}$ (see Figure \ref{Fig : Some examples for strong specular tangent hyperplanes in two-dimensions}). Also, let $f$ be the function in Example \ref{Ex : not partial differentiable but specularly partial differentiable}. All these functions are specularly differentiable at $(0, 0)$. Also, each strong specular tangent hyperplane of $f_1$, $f_2$, $f_3$, and $f$ at $(0, 0, 0)$ is the same as the $xy$-plane, that is, \begin{equation*} \operatorname{stg}f_1(x, y) = \operatorname{stg}f_2(x, y) = \operatorname{stg}f_3(x, y) = \operatorname{stg}f(x, y) = 0 \end{equation*} for $(x, y) \in \mathbb{R}^{2}$. \begin{figure} \caption{Some examples for strong specular tangent hyperplanes in two-dimensions} \label{Fig : Some examples for strong specular tangent hyperplanes in two-dimensions} \end{figure} \end{example} Here, the following function has nontrivial weak specular tangent hyperplanes. \begin{example} Consider the function $f(x, y) = |x|-|y|- x - y$ for $(x, y) \in \mathbb{R}^{2}$ with the variables $x = x_1$, $y = x_2$ (see Figure \ref{Fig : An example for weak specular tangent hyperplanes in two-dimensions}).
The phototangent of $f$ at $(0, 0)$ with respect to $x$ is $\operatorname{pht}_x f : \mathbb{R}^{2}_x(0, 0) \to \mathbb{R}$ defined by \begin{equation*} \operatorname{pht}_x f ( \mathbf{y}_1 ) = \begin{cases} -2\mathbf{y}_1 \innerprd \mathbf{e}_1 & \text{if } \mathbf{y}_1 \innerprd \mathbf{e}_1 < 0,\\ 0 & \text{if } \mathbf{y}_1 \innerprd \mathbf{e}_1 \geq 0, \end{cases} \end{equation*} for $\mathbf{y}_1 \in \mathbb{R}^2_{x}(0, 0)$ and the phototangent of $f$ at $(0, 0)$ with respect to $y$ is $\operatorname{pht}_y f : \mathbb{R}^{2}_y (0, 0) \to \mathbb{R}$ defined by \begin{equation*} \operatorname{pht}_y f ( \mathbf{y}_2 ) = \begin{cases} 0 & \text{if } \mathbf{y}_2 \innerprd \mathbf{e}_2 < 0,\\ -2\mathbf{y}_2 \innerprd \mathbf{e}_2 & \text{if } \mathbf{y}_2 \innerprd \mathbf{e}_2 \geq 0, \end{cases} \end{equation*} for $\mathbf{y}_2 \in \mathbb{R}^2_{y}(0, 0)$. Note that $f$ is specularly differentiable at $(0, 0)$ with \begin{equation*} \mathcal{P}(0, 0) = \left\{ (1, 0, 0), \left( - \frac{1}{\sqrt{5}}, 0, \frac{2}{\sqrt{5}} \right), (0, -1, 0), \left( 0, \frac{1}{\sqrt{5}}, -\frac{2}{\sqrt{5}} \right) \right\} =: \left\{ \mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3, \mathbf{p}_4 \right\}. \end{equation*} For each $i \in \left\{ 1, 2, 3, 4 \right\} $, let $\operatorname{wstg}_i f$ be the weak specular tangent hyperplane of $f$ determined by the three points $\mathbf{p}_j$, where $j \in \left\{ 1, 2, 3, 4 \right\} \setminus \left\{ i \right\}$.
Then we see that \begin{align*} \displaystyle \operatorname{wstg}_1 f (x, y) &= - \left( \frac{9 - \sqrt{5}}{2} \right)x + \left( \frac{1 - \sqrt{5}}{2} \right)y,\\ \displaystyle \operatorname{wstg}_2 f (x, y) &= -\left(\frac{1- \sqrt{5}}{2}\right)x + \left(\frac{1- \sqrt{5}}{2}\right)y,\\ \displaystyle \operatorname{wstg}_3 f (x, y) &= \left( \frac{1 - \sqrt{5}}{2} \right)x - \left( \frac{9 - \sqrt{5}}{2} \right)y,\\ \displaystyle \operatorname{wstg}_4 f (x, y) &= \left( \frac{1 - \sqrt{5}}{2} \right)x - \left( \frac{1 - \sqrt{5}}{2} \right)y \end{align*} for $(x, y) \in \mathbb{R}^{2}$ with the variable $z = x_3$. \end{example} \begin{figure} \caption{An example for weak specular tangent hyperplanes in two-dimensions} \label{Fig : An example for weak specular tangent hyperplanes in two-dimensions} \end{figure} In the specular derivative sense, weak specular tangent hyperplanes can be handled with just a few appropriate variables rather than every variable. In other words, there exists a function which has a strong specular tangent hyperplane but is not strongly specularly differentiable. The following function exemplifies this property. \begin{example} Consider the function $f : X \to \mathbb{R}$ defined by \begin{equation*} f ( x_1, x_2, x_3 ) = \frac{1}{x_1} + \left\vert x_2 \right\vert + x_3^2 \end{equation*} for $( x_1, x_2, x_3 ) \in X$, where $X = \left\{ (x_1, x_2, x_3) \in \mathbb{R}^{3} : x_1 \neq 0 \right\}$. Write the point $(0, 0, 0)$ as $\mathbf{o}$. Then $\mathcal{V}( \mathbf{o} ) = \left\{ 2, 3 \right\}$ and $f[ \mathbf{o} ]_{(2)} = 0 = f[ \mathbf{o} ]_{(3)}$. Hence, $f$ is weakly specularly differentiable at $\mathbf{o}$ with $f[ \mathbf{o} ]_{(\text{w})} = 0$. One can calculate that \begin{equation*} \mathcal{P}( \mathbf{o} ) = \left\{ \left( 0, \frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}}\right), \left( 0, -\frac{1}{\sqrt{2}}, 0, \frac{1}{\sqrt{2}}\right), \left( 0, 0, 1, 0\right), \left( 0, 0, -1, 0\right) \right\}.
\end{equation*} Since four points are needed to determine a hyperplane in $\mathbb{R}^{4}$, there is a single weak specular tangent hyperplane to the graph of $f$ determined by the points in $\mathcal{P}( \mathbf{o} )$, that is, $\operatorname{stg}f ( x_1, x_2, x_3 ) = 0$. Consequently, we conclude that $f$ has a strong specular tangent hyperplane at $\overline{\mathbf{o}}_{(\text{w})}$. \end{example} \subsection{The specular gradient and specularly directional derivatives} Naturally, we define the gradient of a function in the specular derivative sense. \begin{definition} Let $f:U \to \mathbb{R}$ be a function, where $U$ is an open subset of $\mathbb{R}^n$. Let $\mathbf{x}=( x_1, x_2, \cdots, x_n )$ denote a typical point of $\mathbb{R}^{n}$. Let $\mathbf{a}$ be a point of $U$. Assume the function $f$ is specularly differentiable at $\mathbf{a}$. We define the \emph{specular gradient} of $f$ to be the vector \begin{equation*} D^S_{\mathbf{x}} f := \left( \frac{\partial f}{\partial^S x_1}, \frac{\partial f}{\partial^S x_2},\cdots, \frac{\partial f}{\partial^S x_n} \right). \end{equation*} Also, the specular gradient of $f$ at $\mathbf{a}$ is \begin{equation*} D^S_{\mathbf{x}} f ( \mathbf{a} ) := \left( \frac{\partial f}{\partial^S x_1}( \mathbf{a} ), \frac{\partial f}{\partial^S x_2}( \mathbf{a} ), \cdots, \frac{\partial f}{\partial^S x_n}( \mathbf{a} ) \right). \end{equation*} \end{definition} When there is no danger of confusion, we write $D^S$ for $D^S_{\mathbf{x}}$. \begin{remark} The notation for the specular gradient allows us to rewrite the function \eqref{Rmk : strong specular tangent hyperplane} as \begin{equation*} \operatorname{stg}f ( \mathbf{x} ) = D^S_{\mathbf{x}}f( \mathbf{a} ) \innerprd ( \mathbf{x} - \mathbf{a}) + f[\mathbf{a}] \end{equation*} for $\mathbf{x} \in U$. \end{remark} We provide a simple example for the specular gradient.
\begin{example} Consider the function $f : \mathbb{R}^{n} \to \mathbb{R}$ defined by \begin{equation*} f( \mathbf{x} ) = \left\vert x_1 \right\vert + \left\vert x_2 \right\vert + \cdots + \left\vert x_n \right\vert \end{equation*} for $\mathbf{x} = ( x_1, x_2, \cdots, x_n ) \in \mathbb{R}^{n}$. For each $i = 1$, $2$, $\cdots$, $n$, one can compute that \begin{equation*} \partial^S_{x_i} f ( \mathbf{x} ) = \operatorname{sgn} ( x_i ) \end{equation*} so that we conclude \begin{equation*} D^S f ( \mathbf{x} ) = \left( \operatorname{sgn} ( x_1 ), \operatorname{sgn} ( x_2 ), \cdots, \operatorname{sgn} ( x_n ) \right) \end{equation*} for $\mathbf{x} = ( x_1, x_2, \cdots, x_n ) \in \mathbb{R}^{n}$. \end{example} Inspired by the formula \eqref{Rmk : specularly partial derivatives formula}, we define directional derivatives in the specular derivative sense, taking Corollary \ref{Crl : The uniqueness of the point at which a function has a wstg} into account. \begin{definition} Let $f:U\to \mathbb{R}$ be a multi-variable function with an open set $U \subset \mathbb{R}^{n}$ and let $\mathbf{a}$ be a point in $U$. Assume $f$ is specularly differentiable at $\mathbf{a}$. Let $\mathbf{u}\in \mathbb{R}^{n}$ be a unit vector. We define the \emph{specularly directional derivative} of $f$ at $\mathbf{a}$ in the direction of $\mathbf{u}$, denoted by $\partial^S_{\mathbf{u}}f( \mathbf{a} )$, to be \begin{equation} \label{Def : specularly directional derivative} \partial^S_{\mathbf{u}}f( \mathbf{a} ) := \lim_{h \to 0}\frac{g (h) \sqrt{\left( g (-h) \right)^2 + h^2} - g (-h) \sqrt{\left( g (h) \right)^2 + h^2}}{h \sqrt{\left( g (-h) \right)^2 + h^2} + h \sqrt{\left( g (h) \right)^2 + h^2}}, \end{equation} where $g(h) = f( \mathbf{a} + h \mathbf{u} ) - f[\mathbf{a}]$. \end{definition} Writing $\partial^S_{\mathbf{e}_{i}}f = \partial^S_{x_{i}}f$, we can interpret a specularly partial derivative as a special case of specularly directional derivatives.
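Although it lies outside the formal development, the limit in \eqref{Def : specularly directional derivative} can be checked numerically. The following Python sketch compares the difference quotient, at a small $h$, with the closed form $\tan\left((\arctan\alpha + \arctan\beta)/2\right)$, where $\alpha$ and $\beta$ are the one-sided derivatives of $t \mapsto f(\mathbf{a} + t\mathbf{u})$ at $t = 0$; the function $f$ and the helper names below are illustrative assumptions, not taken from the paper.

```python
# Numerical sanity check for the specularly directional derivative: the
# difference quotient built from g(h) = f(a + h*u) - f[a] should agree with
# tan((arctan(alpha) + arctan(beta)) / 2), where alpha and beta are the
# one-sided derivatives of t -> f(a + t*u) at t = 0.
import math

def f(x, y):
    return abs(x) + 2 * x + abs(y)  # illustrative; specularly differentiable at the origin

def quotient(h, a=(0.0, 0.0), u=(1.0, 0.0), f_at_a=0.0):
    # g(h) = f(a + h*u) - f[a]; here f is continuous at a, so f[a] = f(a) = 0
    g = lambda t: f(a[0] + t * u[0], a[1] + t * u[1]) - f_at_a
    sp = math.hypot(g(h), h)    # sqrt(g(h)^2 + h^2)
    sm = math.hypot(g(-h), h)   # sqrt(g(-h)^2 + h^2)
    return (g(h) * sm - g(-h) * sp) / (h * sm + h * sp)

# one-sided slopes of t -> f(t, 0) = |t| + 2t at t = 0
alpha, beta = 3.0, 1.0
closed_form = math.tan((math.atan(alpha) + math.atan(beta)) / 2)
print(quotient(1e-6), closed_form)  # both equal (1 + sqrt(5)) / 2 = 1.618...
```

Since this $f$ is piecewise linear along the chosen direction, the quotient is independent of $h$, so the agreement is exact up to floating-point error.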
Now, we want to find the relation between specularly directional derivatives and specularly partial derivatives. In the classical sense, the directional derivative of $f$ at $\mathbf{a}$ in the direction of a unit vector $\mathbf{u} \in \mathbb{R}^{n}$, denoted by $\partial_{\mathbf{u}}f ( \mathbf{a} )$, equals the inner product of the gradient of $f$ at $\mathbf{a}$ and $\mathbf{u}$, i.e., \begin{equation} \label{Classical directional derivatives} \partial_{\mathbf{u}}f ( \mathbf{a} ) = D f( \mathbf{a} ) \innerprd \mathbf{u} \end{equation} whenever $f:U \to \mathbb{R}$ is differentiable at $\mathbf{a}$ with an open set $U \subset \mathbb{R}^{n}$. Recall that the proof of the formula \eqref{Classical directional derivatives} uses the Chain Rule. However, specular derivatives do not obey the Chain Rule, as in Remark \ref{Rmk : Specular derivatives may do not have linearity}, so it is not guaranteed that $\partial^S_{\mathbf{u}}f ( \mathbf{a} ) = D^S f( \mathbf{a} ) \innerprd \mathbf{u}$. Moreover, it is not easy to evaluate the limit in the formula \eqref{Def : specularly directional derivative} directly. Therefore, we want to find another way to calculate a specularly directional derivative by applying Proposition \ref{Prop : Calculating spd}. We begin with a definition extended from Definition \ref{Def : high-dimensions specularly partial derivatives}. \begin{definition} \label{Def : high-dimensions directional specularly partial derivatives} Let $f:U\to \mathbb{R}$ be a multi-variable function with an open set $U \subset \mathbb{R}^{n}$ and let $\mathbf{a}$ be a point in $U$. Assume $f$ is specularly differentiable at $\mathbf{a}$. Let $\mathbf{u}\in \mathbb{R}^{n}$ be a unit vector.
We define the \emph{right specularly directional derivative} and \emph{left specularly directional derivative} of $f$ at $\mathbf{a}$ in the direction of $\mathbf{u}$ to be the limits \begin{equation*} \partial^{R}_{\mathbf{u}}f(\mathbf{a}):= \lim_{h \searrow 0}\frac{f(\mathbf{a} + h \mathbf{u})-f[\mathbf{a}]}{h} \qquad \text{and} \qquad \partial^{L}_{\mathbf{u}} f(\mathbf{a}):= \lim_{h \nearrow 0}\frac{f(\mathbf{a} + h \mathbf{u})- f [\mathbf{a}]}{h}, \end{equation*} respectively, whenever they exist as real numbers. Furthermore, we define the \emph{right specular gradient} and \emph{left specular gradient} of $f$ to be the vectors \begin{equation*} D^R_{\mathbf{x}} f := \left( \frac{\partial f}{\partial^R x_1}, \frac{\partial f}{\partial^R x_2},\cdots, \frac{\partial f}{\partial^R x_n} \right) \qquad \text{and} \qquad D^L_{\mathbf{x}} f := \left( \frac{\partial f}{\partial^L x_1}, \frac{\partial f}{\partial^L x_2},\cdots, \frac{\partial f}{\partial^L x_n} \right) , \end{equation*} respectively. As before, we simply write $D^R$ and $D^L$ in place of $D^{R}_{\mathbf{x}}$ and $D^{L}_{\mathbf{x}}$, respectively, when there is no possible ambiguity. \end{definition} Right and left specularly directional derivatives can be calculated by using the right and left specular gradients in the way familiar to us. \begin{proposition} \label{Prop : Calculating right and left specularly directional derivatives} Let $f:U\to \mathbb{R}$ be a multi-variable function with an open set $U \subset \mathbb{R}^{n}$ and let $\mathbf{a}$ be a point in $U$. If $f$ is specularly differentiable at $\mathbf{a}$, then \begin{equation*} \partial^R_{\mathbf{u}} f ( \mathbf{a} ) = D^R f ( \mathbf{a} ) \innerprd \mathbf{u} \qquad \text{and} \qquad \partial^L_{\mathbf{u}} f ( \mathbf{a} ) = D^L f ( \mathbf{a} ) \innerprd \mathbf{u} . \end{equation*} \end{proposition} \begin{proof} Without loss of generality, we prove $\partial^R_{\mathbf{u}} f ( \mathbf{a} ) = D^R f ( \mathbf{a} ) \innerprd \mathbf{u}$.
Consider a new function $A : \mathbb{R}^{n} \to \mathbb{R}$ defined by \begin{equation*} A( \mathbf{a} + t \mathbf{u} ) = \begin{cases} f( \mathbf{a} + t \mathbf{u} ) & \text{if } t > 0,\\ f[\mathbf{a}] & \text{if } t = 0, \\ g( \mathbf{a} + t \mathbf{u} ) & \text{if } t < 0, \end{cases} \end{equation*} where the function $g$ is chosen, using the Whitney Extension Theorem (see \cite{1934_Whitney}), so that $A$ is differentiable at $t = 0$. Also, consider another function $F$ of a single variable defined by \begin{equation*} F(t) = A ( \mathbf{a} + t \mathbf{u} ). \end{equation*} Then, Definition \ref{Def : high-dimensions directional specularly partial derivatives} implies that \begin{equation*} \partial^R_{\mathbf{u}} f( \mathbf{a} ) = \lim_{t \searrow 0}\frac{f(\mathbf{a} + t \mathbf{u}) - f[\mathbf{a}]}{t} = \lim_{t \searrow 0}\frac{A(\mathbf{a} + t \mathbf{u}) - A( \mathbf{a} )}{t} = \lim_{t \to 0}\frac{F(t) - F(0)}{t - 0} = F'(0). \end{equation*} Consider $\mathbf{x}(t) = \mathbf{a} + t \mathbf{u}$. Since $A$ is differentiable at $\mathbf{a}$, we may apply the Chain Rule. Applying the Chain Rule to the right-hand side of the above equation, we see that \begin{equation*} \left. \frac{d}{dt} A( \mathbf{a} + t \mathbf{u} ) \right|_{t=0} = \left. DA( \mathbf{x} ) \innerprd D \mathbf{x} (t) \right|_{t=0} = \left. DA( \mathbf{x} ) \innerprd \mathbf{u} \right|_{t=0} = DA( \mathbf{a} ) \innerprd \mathbf{u} . \end{equation*} Since $D^R A ( \mathbf{a} ) = D^R f( \mathbf{a} )$, we have \begin{equation*} D^R A( \mathbf{a} ) \innerprd \mathbf{u} = D^R f( \mathbf{a} ) \innerprd \mathbf{u} . \end{equation*} Hence, we conclude that $\partial^R_{\mathbf{u}} f( \mathbf{a} ) = D^{R}f( \mathbf{a} ) \innerprd \mathbf{u}$, as desired. \end{proof} Here, we give the calculation of specularly directional derivatives and a condition for a specularly directional derivative to be zero.
\begin{corollary} \label{Crl : extending calculation for direcrional spd in Rn} Under the hypothesis of \emph{Proposition \ref{Prop : Calculating right and left specularly directional derivatives}}, the following statements hold: \begin{enumerate}[label=(\roman*)] \rm\item \emph{The specularly directional derivative $\partial^{S}_{\mathbf{u}} f ( \mathbf{a} )$ exists for all unit vectors $\mathbf{u} \in \mathbb{R}^{n}$.} \label{Crl : extending calculation for direcrional spd in Rn - 1} \rm\item \emph{It holds that \begin{equation*} \partial^S_{\mathbf{u}} f ( \mathbf{a} ) = \begin{cases} \displaystyle \frac{\alpha\beta-1 + \sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)}}{\alpha+\beta} & \text{if } \alpha+\beta\neq 0,\\[0.45cm] 0 & \text{if } \alpha+\beta = 0, \end{cases} \end{equation*} where $\alpha := D^R f ( \mathbf{a} ) \innerprd \mathbf{u} $ and $\beta := D^L f ( \mathbf{a} ) \innerprd \mathbf{u}$.} \label{Crl : extending calculation for direcrional spd in Rn - 2} \rm\item \emph{$\partial^S_{\mathbf{u}} f ( \mathbf{a} ) = 0$ if and only if $\left( D^R f ( \mathbf{a} ) + D^L f ( \mathbf{a} ) \right) \innerprd \mathbf{u} = 0$.} \label{Crl : extending calculation for direcrional spd in Rn - 3} \rm\item \emph{Signs of $\partial^S_{\mathbf{u}} f ( \mathbf{a} )$ and $\left( D^R f ( \mathbf{a} ) + D^L f ( \mathbf{a} ) \right) \innerprd \mathbf{u}$ are equal, i.e., $\operatorname{sgn}\left( \partial^S_{\mathbf{u}} f ( \mathbf{a} ) \right) = \operatorname{sgn}\left( \left( D^R f ( \mathbf{a} ) + D^L f ( \mathbf{a} ) \right) \innerprd \mathbf{u} \right)$.} \label{Crl : extending calculation for direcrional spd in Rn - 4} \end{enumerate} \end{corollary} \begin{proof} The application of Proposition \ref{Prop : Calculating spd} and Proposition \ref{Prop : Calculating right and left specularly directional derivatives} yields the statements \ref{Crl : extending calculation for direcrional spd in Rn - 1} and \ref{Crl : extending calculation for direcrional spd in 
Rn - 2}. Next, the statements \ref{Crl : extending calculation for direcrional spd in Rn - 3} and \ref{Crl : extending calculation for direcrional spd in Rn - 4} can be proved by applying \ref{Lem : the function A - 1} and \ref{Lem : the function A - 2} in Lemma \ref{Lem : the function A} (see Appendix \ref{Lem : the function A for calculation of spd}). \end{proof} Now, we estimate specularly directional derivatives and find the conditions under which they attain their maximum and minimum. \begin{theorem} \label{Thm : estimate of the specularly directional derivative} Let $f:U\to \mathbb{R}$ be a multi-variable function with an open set $U \subset \mathbb{R}^{n}$ and let $\mathbf{a}$ be a point in $U$. If $f$ is specularly differentiable at $\mathbf{a}$, then the following statements hold: \begin{enumerate}[label=(\roman*)] \rm\item It holds that \begin{equation*} - \frac{\left\| D^R f ( \mathbf{a} ) \right\| + \left\| D^L f ( \mathbf{a} ) \right\|}{2} \leq \partial^S_{\mathbf{u}} f ( \mathbf{a} ) \leq \frac{\left\| D^R f ( \mathbf{a} ) \right\| + \left\| D^L f ( \mathbf{a} ) \right\|}{2} \end{equation*} for every unit vector $\mathbf{u} \in \mathbb{R}^{n}$. \label{Thm : estimate of the specularly directional derivative - 1} \rm\item The specularly directional derivative $\partial^S_{\mathbf{u}} f ( \mathbf{a} )$ is maximized with respect to direction when $\mathbf{u}$ points in the same direction as $D^{R}f ( \mathbf{a} )$ and $D^{L}f ( \mathbf{a} )$, and is minimized with respect to direction when $\mathbf{u}$ points in the opposite direction to $D^{R}f ( \mathbf{a} )$ and $D^{L}f ( \mathbf{a} )$. \label{Thm : estimate of the specularly directional derivative - 2} \rm\item Furthermore, the maximum and minimum values of $\partial^S_{\mathbf{u}}f ( \mathbf{a} )$ are \begin{equation*} \frac{\left\| D^R f ( \mathbf{a} ) \right\| + \left\| D^L f ( \mathbf{a} ) \right\|}{2} \qquad \text{and} \qquad - \frac{\left\| D^R f ( \mathbf{a} ) \right\| + \left\| D^L f ( \mathbf{a} ) \right\|}{2}, \end{equation*} respectively.
\label{Thm : estimate of the specularly directional derivative - 3} \end{enumerate} \end{theorem} \begin{proof} In terms of \ref{Crl : extending calculation for direcrional spd in Rn - 2} in Corollary \ref{Crl : extending calculation for direcrional spd in Rn}, one can find that \begin{equation*} \alpha = D^R f ( \mathbf{a} ) \innerprd \mathbf{u} = \left\| D^R f ( \mathbf{a} ) \right\| \left\| \mathbf{u} \right\| \cos \theta_1 = \left\| D^R f ( \mathbf{a} ) \right\| \cos \theta_1 \end{equation*} and \begin{equation*} \beta = D^L f ( \mathbf{a} ) \innerprd \mathbf{u} = \left\| D^L f ( \mathbf{a} ) \right\| \left\| \mathbf{u} \right\| \cos \theta_2 = \left\| D^L f ( \mathbf{a} ) \right\| \cos \theta_2 , \end{equation*} where $\theta_1$ is the angle between the unit vector $\mathbf{u}$ and the right specular gradient $D^R f ( \mathbf{a} )$ and $\theta_2$ is the angle between the unit vector $\mathbf{u}$ and the left specular gradient $D^L f ( \mathbf{a} )$. Applying Lemma \ref{Lem : the function A} and the triangle inequality yields that \begin{align*} \left\vert \partial^S_{\mathbf{u}} f ( \mathbf{a} ) \right\vert &\leq \frac{1}{2} \left(\left\vert D^R f ( \mathbf{a} ) \innerprd \mathbf{u} + D^L f ( \mathbf{a} ) \innerprd \mathbf{u} \right\vert \right)\\ & \leq \frac{1}{2} \left( \left\vert D^R f ( \mathbf{a} ) \innerprd \mathbf{u} \right\vert + \left\vert D^L f ( \mathbf{a} ) \innerprd \mathbf{u} \right\vert \right)\\ & =\frac{1}{2} \left( \left\| D^R f ( \mathbf{a} ) \right\| \left\vert \cos \theta_1 \right\vert + \left\| D^L f ( \mathbf{a} ) \right\| \left\vert \cos \theta_2 \right\vert \right) , \end{align*} namely \begin{equation*} - \frac{\left\| D^R f ( \mathbf{a} ) \right\| \left\vert \cos \theta_1 \right\vert + \left\| D^L f ( \mathbf{a} ) \right\| \left\vert \cos \theta_2 \right\vert}{2} \leq \partial^S_{\mathbf{u}} f ( \mathbf{a} ) \leq \frac{\left\| D^R f ( \mathbf{a} ) \right\| \left\vert \cos \theta_1 \right\vert + \left\| D^L f ( \mathbf{a} ) \right\| 
\left\vert \cos \theta_2 \right\vert}{2}. \end{equation*} Now, first assume $\theta_1 = 0 = \theta_2$. Then \ref{Crl : extending calculation for direcrional spd in Rn - 4} in Corollary \ref{Crl : extending calculation for direcrional spd in Rn} asserts $D^R f ( \mathbf{a} ) \innerprd \mathbf{u} + D^L f ( \mathbf{a} ) \innerprd \mathbf{u} \geq 0$ so that \begin{equation*} 0 \leq \partial^S_{\mathbf{u}} f ( \mathbf{a} ) \leq \frac{\left\| D^R f ( \mathbf{a} ) \right\| + \left\| D^L f ( \mathbf{a} ) \right\|}{2}. \end{equation*} Thus, $\partial^S_{\mathbf{u}} f ( \mathbf{a} )$ has the maximum, with respect to $\mathbf{u}$, \begin{equation*} \max_{\mathbf{u} \in \mathbb{R}^{n},~\left\| \mathbf{u} \right\| = 1} \partial^S_{\mathbf{u}} f ( \mathbf{a} ) = \frac{\left\| D^R f ( \mathbf{a} ) \right\| + \left\| D^L f ( \mathbf{a} ) \right\|}{2} \end{equation*} when $\mathbf{u}$ points in the same direction as $D^{R}f ( \mathbf{a} )$ and $D^{L}f ( \mathbf{a} )$. In the same way, one can show that $\theta_1 = \pi = \theta_2$ implies that $\partial^S_{\mathbf{u}} f ( \mathbf{a} )$ has the minimum, with respect to $\mathbf{u}$, \begin{equation*} \min_{\mathbf{u} \in \mathbb{R}^{n},~\left\| \mathbf{u} \right\| = 1} \partial^S_{\mathbf{u}} f ( \mathbf{a} ) = -\frac{\left\| D^R f ( \mathbf{a} ) \right\| + \left\| D^L f ( \mathbf{a} ) \right\|}{2} \end{equation*} when $\mathbf{u}$ points in the opposite direction to $D^{R}f ( \mathbf{a} )$ and $D^{L}f ( \mathbf{a} )$. \end{proof} \section{Differential equations with specular derivatives} In this section, we construct differential equations with specular derivatives and solve them. Recall that a piecewise continuous function is continuous at each point in its domain except at finitely many points, at which the function has a jump discontinuity. Note that a piecewise continuous function on a closed interval is continuous at the end points of the closed interval.
\begin{definition} Let $f:[a, b] \to \mathbb{R}$ be a piecewise continuous function, where $[a, b]$ is a closed interval in $\mathbb{R}$. In this paper, the \emph{singular set} of $f$ is defined to be the set of all points $s_1$, $s_2$, $\cdots$, $s_k$ at which $f$ has a jump discontinuity, arranged so that $s_1 < s_2 < \cdots< s_k$. We call elements of the singular set \emph{singular points}. \end{definition} Let $f : [a, b] \to \mathbb{R}$ be a generalized Riemann integrable function on the closed interval $[a, b] \subset \mathbb{R}$ and $F : [a, b] \to \mathbb{R}$ be the indefinite integral of $f$ defined by \begin{equation*} F(x) := \int_{a}^{x} f(t) ~dt \end{equation*} for $x \in [a, b]$. Then the Fundamental Theorem of Calculus (FTC for short) asserts the following properties: \begin{enumerate}[label=(\roman*)] \rm\item The indefinite integral $F$ is continuous on $[a, b]$. \rm\item There exists a null set $\mathcal{N}$ such that if $x \in [a, b] \setminus \mathcal{N}$, then $F$ is differentiable at $x$ and $F'(x) = f(x)$. \rm\item If $f$ is continuous at $x_0 \in [a, b]$, then $F'( x_0 ) = f( x_0 )$. \end{enumerate} This statement is the second form of FTC, whereas the first form is stated without the indefinite integral as in \cite{2011_Bartle_BOOK}. \subsection{The Fundamental Theorem of Calculus with specular derivatives} The goal of this subsection is to define an indefinite integral $F$ of a piecewise continuous function $f$ and to find the relationship between $F^{\spd}$ and $f$. We take the first step with an example for a familiar function. \begin{example} \label{EX : the sign function} Consider the sign function $\operatorname{sgn}(x)$ for $x \in [-1, 1]$ (see Figure \ref{Fig : FTC with specular derivatives for the sign function}). Note that the sign function is piecewise continuous. Our hope is to find a continuous function $F:[-1, 1] \to \mathbb{R}$ such that \begin{equation*} \frac{d}{d^S x} F(x) = \operatorname{sgn}(x) \end{equation*} for $x \in [-1, 1]$.
First off, define the functions $\overline{f_0} : [-1, 0] \to \mathbb{R}$ and $\overline{f_1} : [0, 1] \to \mathbb{R}$ by \begin{equation*} \overline{f_0}(x_0) = \begin{cases} \operatorname{sgn}[-1) & \text{if } x_0 = -1 ,\\ \operatorname{sgn}(x_0) & \text{if } x_0 \in (-1, 0) ,\\ \operatorname{sgn}(0] & \text{if } x_0 = 0 \end{cases} = -1 \qquad \text{and} \qquad \overline{f_1}(x_1) = \begin{cases} \operatorname{sgn}[0) & \text{if } x_1 =0,\\ \operatorname{sgn}(x_1) & \text{if } x_1 \in (0, 1), \\ \operatorname{sgn}(1] & \text{if } x_1 = 1 \end{cases} = 1, \end{equation*} respectively. Write the indefinite integrals of $\overline{f_0}$ and $\overline{f_1}$: \begin{equation*} F_0(x_0) = \int_{-1}^{x_0} \overline{f_0} (t)~dt = \int_{-1}^{x_0} -1 ~dt = -x_0 - 1 \qquad \text{and} \qquad F_1(x_1) = \int_{0}^{x_1} \overline{f_1}(t) ~dt = \int_{0}^{x_1} 1 ~dt = x_1 \end{equation*} for $x_0 \in [-1, 0]$ and $x_1 \in [0, 1]$. Now, define the function $F : [-1, 1] \to \mathbb{R}$ by \begin{equation*} F(x) = \begin{cases} F_0(x) & \text{if } x\in [-1, 0),\\ F_1(x) + C_1 & \text{if } x\in [0, 1] \end{cases} = \begin{cases} -x -1 & \text{if } x\in [-1, 0),\\ x + C_1 & \text{if } x\in [0, 1], \end{cases} \end{equation*} for some constant $C_1 \in \mathbb{R}$. We want to find $C_1$ so that $F$ is continuous on $[-1, 1]$ and $F^{\spd}(x)=\operatorname{sgn}(x)$ for all $x \in [-1, 1]$. Since $\overline{f_0}$ is continuous on $[-1, 0]$ and $\overline{f_1}$ is continuous on $[0, 1]$, FTC asserts that $F_0$ is continuous on $[-1, 0]$ with $F_0'(x_0)=\overline{f_0}(x_0)$ for all $x_0 \in [-1, 0]$ as well as $F_1$ is continuous on $[0, 1]$ with $F_1'(x_1)=\overline{f_1}(x_1)$ for all $x_1 \in [0, 1]$. Then $F$ is continuous and $F'(x) = \operatorname{sgn}(x)$ on $[-1,1]\setminus \left\{ 0 \right\}$. Moreover, Proposition \ref{Prop : Calculating spd} yields that $F^{\spd}(0)=\operatorname{sgn}(0)$ since $F^{\spd}_-(0) = -1$ and $F^{\spd}_+(0) = 1$.
Hence, we have $F^{\spd}(x) \equiv \operatorname{sgn}(x)$ thanks to Theorem \ref{Thm : ordinary dervatives and specular derivatives}. Finally, it remains to prove that $F$ is continuous at $0$. The constant $C_1$ has to satisfy the equation $F(0] = F_1(0) + C_1$, i.e., \begin{equation*} \lim_{x\nearrow 0} ( -x -1 ) = C_1. \end{equation*} Then $C_1 = -1$. Hence, we see that $F(x) = |x| - 1$ for $x \in [-1, 1]$. Consequently, ignoring the constant, the function $F$ is continuous on $[-1, 1]$ and \begin{equation*} \frac{d}{d^S x} \left(\int_{-1}^x \operatorname{sgn}(t)~dt\right) = \frac{d}{d^S x} \left\vert x \right\vert = \operatorname{sgn}(x) \end{equation*} for $x \in [-1, 1]$. \end{example} \begin{figure} \caption{FTC with specular derivatives for the sign function} \label{Fig : FTC with specular derivatives for the sign function} \end{figure} Motivated by the previous example, we define the indefinite integral of a piecewise continuous function. \begin{definition} \label{Def : the indefinite integral} Let $f:[a, b] \to \mathbb{R}$ be a piecewise continuous function. Let $\left\{ s_1, s_2, \cdots, s_k \right\}$ be the singular set of $f$. Define $s_0 := a$ and $s_{k+1}:=b$. Denote the index set by $\mathcal{I}:=\left\{ 0, 1, \cdots, k \right\}$. For each $i \in \mathcal{I}$, define the function $\overline{f_i}:\left[ s_i, s_{i + 1} \right] \to \mathbb{R}$ to be the \emph{extended} \emph{function} of $f$ on $( s_i, s_{i+1} )$ by \begin{equation*} \overline{f_i}( x_i ) := \begin{cases} f[s_i) & \text{if } x_i = s_i,\\ f(x_i) & \text{if } x_i \in ( s_i, s_{i+1} ),\\ f(s_{i+1}] & \text{if } x_i = s_{i+1}.
\end{cases} \end{equation*} We define the \emph{indefinite} \emph{integral} $F$ of $f$ by \begin{equation*} F(x) := \begin{cases} \displaystyle \int_{a}^{x} \overline{f_0}(t)~dt & \text{if } x \in \left[a, s_1\right],\\[0.45cm] \displaystyle \int_{s_1}^{x} \overline{f_1}(t)~dt + \int_{s_0}^{s_1}\overline{f_0}(t)~dt & \text{if } x \in \left(s_1, s_2\right],\\[0.45cm] \displaystyle \int_{s_2}^{x} \overline{f_2}(t)~dt + \sum_{\ell=1}^{2} \int_{s_{\ell-1}}^{s_\ell} \overline{f_{\ell-1}}(t)~dt & \text{if } x \in \left(s_2, s_3\right],\\ \qquad \qquad \qquad \vdots \\ \displaystyle \int_{s_k}^{x} \overline{f_k}(t)~dt + \sum_{\ell=1}^{k} \int_{s_{\ell-1}}^{s_\ell} \overline{f_{\ell-1}}(t)~dt & \text{if } x \in \left(s_k, s_{k+1}\right]. \end{cases} \end{equation*} \end{definition} Our goal is to state and prove the relation, the so-called FTC with specular derivatives, between a piecewise continuous function $f$ and the specular derivative $F^{\spd}$ of the indefinite integral $F$ of $f$, that is, $F$ is continuous and \begin{equation} \label{Hope : FTC with specular derivatives} \frac{d}{d^S x} F(x) = f(x) \end{equation} for $x \in [a, b]$. To achieve this, we need to examine suitable conditions on $f$. Consider the piecewise continuous function $g : [-1, 1] \to \mathbb{R}$ defined by \begin{equation*} g(x) = \begin{cases} -1 & \text{if } x \in [-1, 0],\\ 1 & \text{if } x \in (0, 1]. \end{cases} \end{equation*} Our hope \eqref{Hope : FTC with specular derivatives} fails for $g$ in light of Proposition \ref{Prop : Calculating spd}. Indeed, the left and right derivatives of the indefinite integral $G$ of $g$ at $0$ are $-1$ and $1$, so that Proposition \ref{Prop : Calculating spd} yields $G^{\spd}(0) = 0$. In other words, if \eqref{Hope : FTC with specular derivatives} held for $g$, then we would need $g(0) = 0$, whereas $g(0) = -1$.
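The failure just described can also be observed numerically. The following Python sketch is a minimal illustration, assuming the closed-form expression for the specular derivative in terms of the one-sided derivatives; the helper names `specular` and `G` are hypothetical, not taken from the paper.

```python
# Numerical illustration of why a hypothesis on f is needed: for the function
# g = -1 on [-1, 0] and g = +1 on (0, 1], the indefinite integral is
# G(x) = |x| - 1.  Its one-sided derivatives at 0 force the specular
# derivative G^S(0) to be 0, which cannot equal g(0) = -1.
import math

def specular(alpha, beta):
    # closed form for the specular derivative from the one-sided derivatives
    if alpha + beta == 0:
        return 0.0
    return (alpha * beta - 1 + math.sqrt((alpha**2 + 1) * (beta**2 + 1))) / (alpha + beta)

def G(x):
    return abs(x) - 1  # indefinite integral of g on [-1, 1]

# G is piecewise linear, so one-sided difference quotients are exact at h = 0.5
h = 0.5
alpha = (G(h) - G(0)) / h     # right derivative of G at 0: +1.0
beta = (G(-h) - G(0)) / (-h)  # left derivative of G at 0: -1.0
print(specular(alpha, beta))  # 0.0, whereas g(0) = -1
```

The same helper also reproduces the generic case: for instance, `specular(3.0, 1.0)` returns $(1+\sqrt{5})/2$, matching $\tan\left((\arctan 3 + \arctan 1)/2\right)$.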
Hence, it is reasonable to assume the following hypothesis for a piecewise continuous function $f:[a,b] \to \mathbb{R}$ in stating FTC with specular derivatives: \begin{enumerate}[label=(H\arabic*), ref=(H\arabic*), start=1] \rm\item\label{H1} For each point $x \in (a, b)$, the property \begin{equation*} f(x) = \begin{cases} \displaystyle \frac{\alpha\beta-1 + \sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)}}{\alpha+\beta} & \text{if } \alpha+\beta\neq 0,\\ 0 & \text{if } \alpha+\beta=0, \end{cases} \end{equation*} holds, where $\alpha := f[x)$ and $\beta := f(x]$. \end{enumerate} As for \ref{H1}, it suffices to check only the points at which $f$ has a jump discontinuity. Also, one can assume the following hypothesis, which results from Lemma \ref{Lmm : average of angle}, instead of \ref{H1}: \begin{enumerate}[label=(H\arabic*), ref=(H\arabic*), start=2] \rm\item\label{H2} For each point $x \in (a, b)$, the property \begin{equation*} f(x) = \tan\left( \frac{\arctan\alpha + \arctan\beta}{2} \right) \end{equation*} holds, where $\alpha := f[x)$ and $\beta := f(x]$. \end{enumerate} However, we prefer to assume \ref{H1}. Before stating FTC with specular derivatives, we give an example with a simple periodic function in order to illustrate the strategy of the proof. \begin{example} \label{Ex : periodic function} For a fixed $k \in \mathbb{N}$, let $p:[0, k + 1] \to \mathbb{R}$ be the periodic function defined by \begin{equation*} p(x) = \begin{cases} 2x & \text{if } x \in [0, 1),\\ \displaystyle \frac{-1+\sqrt{5}}{2} & \text{if } x=1,\\[4pt] p(x-1) & \text{if } x \in (1, k + 1),\\ 2 & \text{if } x=k + 1, \end{cases} \end{equation*} which is illustrated in Figure \ref{Fig : FTC with specular derivatives for the periodic function}. Note that $p$ satisfies \ref{H1}. Denote the index sets by $\mathcal{I} := \left\{ 0, 1, \cdots, k \right\}$ and $\mathcal{J} := \mathcal{I} \setminus \left\{ 0 \right\}$.
For each $i \in \mathcal{I}$, the extended function $\overline{f_i} : [i, i+1] \to \mathbb{R}$ is defined by \begin{equation*} \overline{f_i}( x_i ) = \begin{cases} p[i) & \text{if } x_i = i,\\ p( x_i ) & \text{if } x_i \in (i, i+1),\\ p(i+1] & \text{if } x_i = i+1, \end{cases} = 2( x_i - i ) \end{equation*} and the indefinite integral of $\overline{f_i}$ is defined by \begin{equation*} F_i( x_i ) = \int_{i}^{x_i} \overline{f_i}(t) ~dt = \int_{i}^{x_i} 2(t - i) ~dt = \left[ t^2 - 2it \right]_{t=i}^{x_i} = ( x_i - i )^{2} \end{equation*} for $x_i \in [i, i+1]$. Now, observe that the function $F : [0, k+1] \to \mathbb{R}$ defined by \begin{equation*} F(x) = \begin{cases} x^{2} & \text{if } x \in [0, 1 ],\\ ( x - 1 )^{2} + 1 & \text{if } x \in ( 1, 2 ],\\ ( x - 2 )^{2} + 2 & \text{if } x \in ( 2, 3 ],\\ \qquad \qquad \qquad \vdots \\ ( x - k )^{2} + k & \text{if } x \in ( k, k+1 ], \end{cases} \end{equation*} is the indefinite integral of $p$. For each $i \in \mathcal{I}$, since $\overline{f_i}$ is continuous on $[i, i + 1]$, FTC asserts that $F_i$ is continuous on $[i, i + 1]$ with $F_i'( x_i )=\overline{f_i}( x_i )$ for all $x_i \in [i, i + 1]$. Then $F$ is continuous and $F'(x)=p(x)$ for all $x \in [0, k+1] \setminus \mathcal{J}$. Moreover, for each $j \in \mathcal{J}$ one can calculate \begin{equation*} F^{\spd}_-(j) = \left. 2 [ x - ( j-1 ) ] \right|_{x=j} = 2 \qquad \text{and} \qquad F^{\spd}_+(j) = \left. 2 ( x - j ) \right|_{x=j} = 0 \end{equation*} so that \begin{equation*} F^{\spd}(j) = \frac{-1 + \sqrt{5}}{2} = p(j) \end{equation*} by using Proposition \ref{Prop : Calculating spd}. Hence, we have $F^{\spd}(x)\equiv p(x)$ owing to Theorem \ref{Thm : ordinary dervatives and specular derivatives}. It suffices to prove that $F$ is continuous on $\mathcal{J}$. Indeed, for each $j \in \mathcal{J}$, observe that \begin{equation*} F(j) = \left[ j - (j - 1) \right]^2 + (j - 1) = j = \lim_{x \searrow j} (x - j)^2 + j = \lim_{x \searrow j} F(x).
\end{equation*} which means that $F$ is continuous at $x=j$. Consequently, the indefinite integral $F$ is continuous on $[0, k+1]$ and \begin{equation*} \frac{d}{d^S x} \left( \int_{0}^{x} p(t) ~dt \right) = \frac{d}{d^S x} F(x) = p(x) \end{equation*} for $x \in [0, k+1]$. \end{example} \begin{figure} \caption{FTC with specular derivatives for the periodic function} \label{Fig : FTC with specular derivatives for the periodic function} \end{figure} Here is the connection between the notions of the specular derivative and the integral. \begin{theorem} \label{Thm : FTC with specular derivatives} \emph{(The Fundamental Theorem of Calculus with specular derivatives)} Let $f:[a, b] \to \mathbb{R}$ be a piecewise continuous function. Assume ${\rm \ref{H1}}$. Let $F$ be the indefinite integral of $f$. Then the following properties hold: \begin{enumerate}[label=(\roman*)] \rm\item $F$ \emph{is continuous on} $[a, b]$. \rm\item $F^{\spd}(x) = f(x)$ \emph{for all} $x \in [a, b]$. \end{enumerate} \end{theorem} \begin{proof} Denote the singular set of $f$ by \begin{equation*} \mathcal{S} := \left\{ s_1, s_2, \cdots, s_k \right\} \end{equation*} with the index sets $\mathcal{I} := \left\{ 0, 1, \cdots, k \right\}$ and $\mathcal{J} := \mathcal{I} \setminus \left\{ 0 \right\}$, and write $s_0 := a$ and $s_{k+1} := b$. For each $i \in \mathcal{I}$, the extended function $\overline{f_i} : \left[ s_i, s_{i+1} \right] \to \mathbb{R}$ is defined by \begin{equation*} \overline{f_i}( x_i ) = \begin{cases} f[ s_i ) & \text{if } x_i = s_i,\\ f( x_i ) & \text{if } x_i \in (s_i, s_{i+1}),\\ f(s_{i+1}] & \text{if } x_i = s_{i+1}, \end{cases} \end{equation*} and the indefinite integral of $\overline{f_i}$ is defined by \begin{equation*} F_i( x_i ) = \int_{s_i}^{x_i} \overline{f_i}(t) ~dt \end{equation*} for $x_i \in \left[ s_i, s_{i+1} \right]$.
Now, one can find that the function $F : [a, b] \to \mathbb{R}$ defined by \begin{equation*} F(x) := \begin{cases} F_0(x) & \text{if } x \in [a, s_1],\\ F_1(x) + F_0(s_1) & \text{if } x \in (s_1, s_2],\\ \displaystyle F_2(x) + \sum_{\ell=1}^{2} F_{\ell-1}(s_\ell) & \text{if } x \in (s_2, s_3],\\ \qquad \qquad \qquad \vdots \\ \displaystyle F_k(x) + \sum_{\ell=1}^{k} F_{\ell-1}(s_\ell) & \text{if } x \in (s_k, s_{k+1}], \end{cases} \end{equation*} is the indefinite integral $F$ of $f$. For each $i \in \mathcal{I}$, since $\overline{f_i}$ is continuous on $\left[ s_i, s_{i+1} \right]$, FTC asserts that $F_i$ is continuous on $\left[ s_i, s_{i+1} \right]$ with $F_i'( x_i )=\overline{f_i}( x_i )$ for all $x_i \in \left[ s_i, s_{i+1} \right]$. Then $F$ is continuous and $F'(x)=f(x)$ for all $x \in [a, b] \setminus \mathcal{S}$. Moreover, for each $j \in \mathcal{J}$ one can calculate \begin{equation*} F^{\spd}_ - (s_j) = \left. \frac{d}{d^{L}x} \left( F_{j-1}(x) + \sum_{\ell=1}^{j-1} F_{\ell-1}( s_{\ell} ) \right) \right|_{x=s_j} = \left. \frac{d}{d^{L}x} F_{j-1}(x) \right|_{x=s_j} = \overline{f_{j-1}}( s_j ) = f( s_j ] \end{equation*} and \begin{equation*} F^{\spd}_ + (s_j) = \left. \frac{d}{d^{R}x} \left( F_{j}(x) + \sum_{\ell=1}^{j} F_{\ell-1}( s_{\ell} ) \right) \right|_{x=s_j} = \left. \frac{d}{d^{R}x} F_{j}(x) \right|_{x=s_j} = \overline{f_{j}}( s_j ) = f[ s_j ), \end{equation*} which implies that $F^{\spd}(s) = f(s)$ for all $s \in \mathcal{S}$ by Proposition \ref{Prop : Calculating spd} and the assumption \ref{H1}. Hence, we have $F^{\spd}\equiv f$ owing to Theorem \ref{Thm : ordinary dervatives and specular derivatives}. Lastly, it is enough to prove that $F$ is continuous on $\mathcal{S}$.
To show this, for each $j \in \mathcal{J}$, observe that \begin{align*} F( s_j ) &= \int_{s_{j-1}}^{s_j} \overline{f_{j-1}}(t)~dt + \sum_{\ell=1}^{j-1} \int_{s_{\ell-1}}^{s_\ell} \overline{f_{\ell-1}}(t)~dt \\ &= \sum_{\ell=1}^{j} \int_{s_{\ell-1}}^{s_\ell} \overline{f_{\ell-1}}(t)~dt \\ &= \lim_{x \searrow s_j} \int_{s_j}^{x} \overline{f_j}(t)~dt + \sum_{\ell=1}^{j} \int_{s_{\ell-1}}^{s_\ell} \overline{f_{\ell-1}}(t)~dt \\ &= \lim_{x \searrow s_j} F(x), \end{align*} which means that $F$ is continuous at $x=s_j$. In conclusion, the indefinite integral $F$ is continuous on $[a, b]$ and \begin{equation*} \frac{d}{d^S x} \left( \int_{a}^{x} f(t) ~dt \right) = \frac{d}{d^S x} F(x) = f(x) \end{equation*} for $x \in [a, b]$. This completes the proof. \end{proof} \subsection{Ordinary differential equations with specular derivatives} In this subsection, we deal with an ordinary differential equation (ODE for short) in the specular derivative sense. Let $U$ be an open set in $\mathbb{R}$ and $x$ be an arbitrary point in $U$. Consider a \emph{first order ordinary differential equation} with specular derivatives, that is, \begin{equation} \label{1st ODE with specular derivatives} \frac{du}{d^{S}x} = f(u, x), \end{equation} where $f:\mathbb{R} \times U \to \mathbb{R}$ is a given function of two variables and a function $u:U\to \mathbb{R}$ is unknown. We call $u$ a \emph{solution} of a first order ODE with specular derivatives \eqref{1st ODE with specular derivatives} if $u$ is continuous on $U$ and satisfies \eqref{1st ODE with specular derivatives}. In this paper, we study the general \emph{first order linear ordinary differential equation} with specular derivatives of the form \begin{equation*} a_1(x) u^{\spd} + a_0(x) u = g(x), \end{equation*} where continuous functions $a_1:U\to \mathbb{R} \setminus \left\{ 0 \right\}$, $a_0:U \to \mathbb{R}$ and a piecewise continuous function $g:U \to \mathbb{R}$ whose values at the singular points are left undefined are given.
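The formula in \ref{H1} (equivalently, Proposition \ref{Prop : Calculating spd}) is also the tool used below to recover singular values when solving such equations, so a quick numerical sanity check may be helpful. The following is a rough sketch in Python (standard library only, not part of the formal development), comparing the formula with the tangent form of \ref{H2} on the values $\alpha = p[j) = 0$ and $\beta = p(j] = 2$ from Example \ref{Ex : periodic function}:

```python
import math

def spd(alpha, beta):
    # Formula of hypothesis (H1): the value determined by the
    # one-sided quantities alpha and beta.
    if alpha + beta == 0:
        return 0.0
    return (alpha * beta - 1 + math.sqrt((alpha**2 + 1) * (beta**2 + 1))) / (alpha + beta)

def spd_tangent_form(alpha, beta):
    # Equivalent form of hypothesis (H2): tangent of the mean angle.
    return math.tan((math.atan(alpha) + math.atan(beta)) / 2)

# Values from the periodic-function example: alpha = 0, beta = 2,
# for which both forms give (-1 + sqrt(5)) / 2.
assert abs(spd(0.0, 2.0) - (-1 + math.sqrt(5)) / 2) < 1e-12
assert abs(spd(0.0, 2.0) - spd_tangent_form(0.0, 2.0)) < 1e-12
```

The agreement of the two forms is exactly the content of Lemma \ref{Lmm : average of angle}.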
We usually write the \emph{standard form} of a first order linear ODE with specular derivatives: \begin{equation} \label{the general 1st ODE with specular derivatives} u^{\spd} + p(x) u = f(x), \end{equation} where a continuous function $p:U\to \mathbb{R}$ and a piecewise continuous function $f:U \to \mathbb{R}$ whose values at the singular points are unknown are given. As usual, we say that a first order linear ODE with specular derivatives \eqref{the general 1st ODE with specular derivatives} is either \emph{homogeneous} if $f \equiv 0$ or \emph{non-homogeneous} if $f \not\equiv 0$. In particular, we say that \emph{solving} a first order linear ODE with specular derivatives is to obtain not only the solution $u$ but also the singular set of $f$. Here is the reason why we consider a piecewise continuous function rather than a continuous one. According to Theorem \ref{Thm : continuity of specular derivatives}, if $f$ is continuous on $U$, then the first order linear ODE with specular derivatives coincides with the first order linear ODE with classical derivatives. In particular, we are not interested in a homogeneous first order linear ODE with specular derivatives. Thus, we assume that $f$ is a piecewise continuous function when it comes to a first order linear ODE with specular derivatives. Furthermore, as for leaving the singular set unknown, note that the values at singular points may directly affect the solutions. We explain the reason later (see Remark \ref{Rmk : the singular set and ODE}). Recall first how to solve the \emph{first order linear ordinary differential equation} with classical derivatives. Let us consider the ODE: \begin{equation} \label{ODE : first order linear ordinary differential equation with classical derivatives} u' + p(x) u = f(x), \end{equation} where the functions $p : \mathbb{R} \to \mathbb{R}$ and $f : \mathbb{R} \to \mathbb{R}$ are given, and the function $u : \mathbb{R} \to \mathbb{R}$ is the unknown.
The function $\mu( x )$ such that \begin{equation*} \frac{d\mu}{dx} = p(x) \mu(x) \end{equation*} is called the \emph{integrating factor}. Using the integrating factor and the Product Rule yields that the general solution of \eqref{ODE : first order linear ordinary differential equation with classical derivatives} is \begin{equation*} u = \frac{1}{\mu(x)}\left(\int \mu(t) f(t) dt + C\right) \end{equation*} for some constant $C \in \mathbb{R}$ if the functions $f$ and $p$ are continuous. Otherwise, if $f$ or $p$ is a piecewise continuous function, one can find ways to solve the ODE, by separating the given domain or by applying the Laplace transform, in \cite{2017_Boyce_BOOK} and \cite{2018_Zill_BOOK}. As in Remark \ref{Rmk : Specular derivatives may do not have linearity}, one cannot generally solve \eqref{the general 1st ODE with specular derivatives} by using the function $\mu(x)$, the so-called \emph{integrating factor}, since \begin{equation*} \frac{d}{d^{S}x} (\mu u ) = \frac{d\mu}{d^{S}x}u + \mu \frac{du}{d^{S}x} \end{equation*} may fail. See the following example. \begin{example} Consider the functions $u:(0, 2) \to \mathbb{R}$ and $f:(0, 2) \to \mathbb{R}$ defined by \begin{equation*} u(x)= \begin{cases} 0 & \text{if } x \in (0, 1],\\ x - 1 & \text{if } x \in ( 1, 2 ), \end{cases} \qquad \text{and} \qquad f(x) = \begin{cases} 0 & \text{if } x \in (0, 1),\\ x & \text{if } x \in ( 1, 2 ), \end{cases} \end{equation*} respectively. Observe that the function $u$ and the singular point $f(1) = -1 + \sqrt{2}$ solve the first order linear ODE with specular derivatives: \begin{equation*} u^{\spd} + u = f(x) \end{equation*} for $x \in (0, 2)$. However, one can calculate \begin{equation*} \left.\frac{d}{d^{S}x} \left( e^x u \right) \right|_{x=1} = \frac{-1 + \sqrt{e^2 + 1}}{e} \neq e(-1 + \sqrt{2}) = \left. e^x u + e^x \frac{du}{d^{S}x} \right|_{x=1}, \end{equation*} using Proposition \ref{Prop : Calculating spd}.
Hence, we have to find another avenue, one that does not use an integrating factor, to obtain the general solution. \end{example} Now, we start with a restricted type of non-homogeneous first order linear ODE with specular derivatives: \begin{equation} \label{A 1st non-homogeneous linear ODE with specular derivatives} u^{\spd} + c u = f(x), \end{equation} where a constant $c \in \mathbb{R}$ and a piecewise continuous function $f:U \to \mathbb{R}$ whose values at the singular points are unknown are given. We want to obtain the unknown continuous function $u : U \to \mathbb{R}$ and the singular set of $f$. Here are the steps to solve the equation: First, we separate the given equation based on the points at which $f$ has a jump discontinuity. Next, by Proposition \ref{Prop: continuity of specular derivatives weak version}, the separated equations are first order linear ODEs with classical derivatives and can be solved as usual. Third, the solutions are matched by finding proper constants so that $u$ is continuous at the points at which $f$ has a jump discontinuity. Finally, the singular set of $f$ can be found by applying Proposition \ref{Prop : Calculating spd} or Lemma \ref{Lmm : average of angle}. \begin{example} \label{Ex : ReLU function ODE} Let $f:\mathbb{R} \to \mathbb{R}$ be the function defined by \begin{equation*} f(x) = \begin{cases} 0 & \text{if } x < 0,\\ 3x + 1 & \text{if } x > 0. \end{cases} \end{equation*} Consider the first order linear ODE with specular derivatives: \begin{equation} \label{ODE : ReLU function} u^{\spd} + 3u = f(x) \end{equation} for $x \in \mathbb{R}$. We want to obtain the solution $u$ and the value $f(0)$. We first solve the equation separately for $x < 0$ and $x > 0$: \vspace*{-0.5em} \begin{equation*} u' + 3u = \begin{cases} 0 & \text{if } x < 0,\\ 3x + 1 & \text{if } x > 0, \end{cases} \end{equation*} by Proposition \ref{Prop: continuity of specular derivatives weak version}.
Using the integrating factor $\mu(x) = e^{3x}$, the solutions are \begin{equation*} u(x) = \begin{cases} C_1 e^{-3x} & \text{if } x < 0,\\ x + C_2 e^{-3x} & \text{if } x > 0, \end{cases} \end{equation*} for some constants $C_1$ and $C_2$ in $\mathbb{R}$, respectively. In order to match the two solutions so that $u$ is continuous at $0$, the calculation \begin{equation*} u(0] = \lim_{x \nearrow 0} C_1 e^{-3x} = C_1 = C_2 = \lim_{x \searrow 0} \left( x + C_2 e^{-3x} \right) = u[0) \end{equation*} yields $u(0)=C_1 = C_2 =:C$. Hence, we obtain the solution of the given ODE with specular derivatives \eqref{ODE : ReLU function} \begin{equation*} u(x) = \begin{cases} Ce^{-3x} & \text{if } x < 0,\\ C & \text{if } x = 0,\\ x+Ce^{-3x} & \text{if } x > 0, \end{cases} \end{equation*} for some constant $C \in \mathbb{R}$. Also, the singular point is \begin{align*} f(0) = \begin{cases} \displaystyle \frac{9 C^{2} + 1 -\sqrt{\left( 9C^2 +1 \right)\left( 9C^2 - 6C +2 \right)}}{6C-1} & \displaystyle \text{if } C \neq \frac{1}{6},\\[0.45cm] \displaystyle \frac{1}{2} & \displaystyle \text{if } C = \frac{1}{6}, \end{cases} \end{align*} by Proposition \ref{Prop : Calculating spd}. Note that the solution is the ReLU function with $f(0)= -1 + \sqrt{2}$ if $C=0$. \end{example} We now explain the delayed motivation for leaving the singular set of the given piecewise continuous function $f$ undefined. \begin{remark} \label{Rmk : the singular set and ODE} The reason why we assume that every singular point of $f$ is undefined is that the singular points affect not only the existence but also the uniqueness of the solution. Consider Example \ref{Ex : ReLU function ODE}.
Write the function $\varphi:\mathbb{R} \to \mathbb{R}$ defined by $\varphi(C) = u^{\spd}(0) + 3C - f(0)$ for $C \in \mathbb{R}$, i.e., \begin{equation*} \varphi(C) = \begin{cases} \displaystyle \frac{9 C^{2} + 1 -\sqrt{\left( 9C^2 +1 \right)\left( 9C^2 - 6C +2 \right)}}{6C-1} - f(0)& \displaystyle \text{if } C \neq \frac{1}{6},\\[0.45cm] \displaystyle \frac{1}{2} - f(0) & \displaystyle \text{if } C = \frac{1}{6}. \end{cases} \end{equation*} On the one hand, assume $f(0) = -1 + \sqrt{2}$ is given. Then the equation $\varphi(C) = 0$ has two solutions $C = 0$ and $C= -\frac{2}{3}$. Hence, the only solutions of the given ODE \eqref{ODE : ReLU function} are \begin{equation*} u_1(x) = \begin{cases} 0 & \text{if } x \leq 0,\\ x & \text{if } x > 0, \end{cases} \qquad \text{and} \qquad u_2(x) = \begin{cases} \displaystyle -\frac{2}{3}e^{-3x}& \text{if } x \leq 0,\\[0.45cm] \displaystyle x -\frac{2}{3} e^{-3x}& \text{if } x > 0. \end{cases} \end{equation*} On the other hand, assume $f(0) = 0$ is given. Then the equation $\varphi(C) = 0$ has no solution. Hence, there is no solution of the given ODE \eqref{ODE : ReLU function}. In conclusion, to achieve well-posedness concerning the existence and the uniqueness, the piecewise continuous function $f$ has to be undefined on the singular set unless the value at each singular point is prescribed. \end{remark} Solving a first order linear ODE with specular derivatives \eqref{the general 1st ODE with specular derivatives} is akin to solving the ODE with specular derivatives \eqref{A 1st non-homogeneous linear ODE with specular derivatives}. Hence, we state the following theorem without further examples. \begin{theorem} \label{Thm : 1st linear ODE existence} The non-homogeneous first order linear ODE with specular derivatives \eqref{the general 1st ODE with specular derivatives} has a solution.
\end{theorem} \begin{proof} Denote the singular set of $f$ by \begin{equation*} \mathcal{S} := \left\{ s_1, s_2, \cdots, s_k \right\} \end{equation*} with the index sets $\mathcal{I} := \left\{ 0, 1, \cdots, k \right\}$ and $\mathcal{J} := \mathcal{I} \setminus \left\{ 0 \right\}$. Denote the end points of the open set $U=(a, b)$ by $s_0 := a$ and $s_{k+1} := b$ (possibly $a = -\infty$ or $b = \infty$). Since $p$ is continuous on $U$, there exists an integrating factor \begin{equation*} \mu ( x ) = \exp \left( \int p(x) ~dx \right) \end{equation*} such that $\mu'(x) = p(x)\mu(x)$ for all $x \in U$. Then the function \begin{equation*} u(x) = \begin{cases} \displaystyle \frac{1}{\mu( x )} \left( \int_{s_0}^{x} \mu(t) f(t)~d t + C_0 \right) & \text{if } x \in ( s_0, s_1 ),\\[0.45cm] \displaystyle \frac{1}{\mu( x )} \left( \int_{s_1}^{x} \mu(t) f(t)~d t + C_1 \right) & \text{if } x \in ( s_1, s_2 ),\\[0.45cm] \qquad \qquad \qquad \vdots & \\[0.45cm] \displaystyle \frac{1}{\mu( x )} \left( \int_{s_k}^{x} \mu(t) f(t)~d t + C_k \right) & \text{if } x \in ( s_k, s_{k+1} ), \end{cases} \end{equation*} collects the solutions of the separated equations. For each $j \in \mathcal{J}$, to achieve that $u$ is continuous at $s_j$, we calculate \begin{align*} u( s_j ] = \frac{1}{\mu( s_j )} \left(\int_{s_{j-1}}^{s_j} \mu(t) f(t)~d t + C_{j-1} \right) = \frac{1}{\mu( s_j )} C_j = \frac{1}{\mu( s_j )} \left( \int_{s_j}^{s_{j}} \mu(t) f(t)~d t + C_j \right) = u[ s_j ), \end{align*} which implies that the constants \begin{equation} \label{Thm : 1st linear ODE existence constants} C_j = \int_{s_{j-1}}^{s_j} \mu( t ) f(t)~d t + C_{j-1} \end{equation} are defined inductively. Hence, for each $j \in \mathcal{J}$, $u$ is continuous at $x=s_j$ with $u( s_j ) = C_j / \mu( s_j )$. Since for each $j \in \mathcal{J}$ there exist $u'_+( s_j )$ and $u'_-( s_j )$, one can compute the value $f( s_j )$ by using Proposition \ref{Prop : Calculating spd} or Lemma \ref{Lmm : average of angle}.
Consequently, we conclude that $u$ is a solution of the given ODE with specular derivatives \eqref{the general 1st ODE with specular derivatives}. \end{proof} \begin{corollary} The non-homogeneous first order linear ODE with specular derivatives \eqref{the general 1st ODE with specular derivatives} with the given value at a single point $x_0 \in U$ has a unique solution. \end{corollary} \begin{proof} Assume $s_1$, $s_2$, $\cdots$, $s_k$ are elements of the singular set of $f$. Denote the end points of the open set $U=(a, b)$ by $s_0 := a$ and $s_{k+1} := b$ (possibly $a = -\infty$ or $b = \infty$). Assume the value $u( x_0 ) = y_0$ is given. Then $x_0 \in ( s_i, s_{i+1} )$ for some $i \in \left\{ 0, 1, \cdots, k \right\}$. In the proof of Theorem \ref{Thm : 1st linear ODE existence}, the given value determines the constant $C_i$ as a fixed real number. The undetermined constants are inductively determined thanks to \eqref{Thm : 1st linear ODE existence constants}. Clearly, all the constants are unique, which implies that the solution of the ODE with specular derivatives \eqref{the general 1st ODE with specular derivatives} is unique. \end{proof} One can weaken the assumption that $p$ is continuous to piecewise continuity; in this case, the given domain has to be separated in a more complicated manner. \subsection{Partial differential equations with specular derivatives} In this subsection, we address a partial differential equation (PDE for short) in light of specular derivatives.
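Before turning to the transport equation, the singular-value computation of Example \ref{Ex : ReLU function ODE} lends itself to a quick numerical spot check. The following is a rough sketch in Python (standard library only, not part of the formal development), treating the ReLU case $C = 0$:

```python
import math

def spd(alpha, beta):
    # Specular derivative from the one-sided derivatives alpha (right)
    # and beta (left), as in Proposition "Calculating spd".
    if alpha + beta == 0:
        return 0.0
    return (alpha * beta - 1 + math.sqrt((alpha**2 + 1) * (beta**2 + 1))) / (alpha + beta)

C = 0.0                # the ReLU case of the example
alpha = 1 - 3 * C      # right derivative of u(x) = x + C e^{-3x} at x = 0
beta = -3 * C          # left derivative of u(x) = C e^{-3x} at x = 0
f0 = spd(alpha, beta) + 3 * C   # f(0) = u^spd(0) + 3 u(0), with u(0) = C

# Closed form for the singular point derived in the example (valid for C != 1/6).
closed_form = (9*C**2 + 1 - math.sqrt((9*C**2 + 1) * (9*C**2 - 6*C + 2))) / (6*C - 1)
assert abs(f0 - closed_form) < 1e-12
assert abs(f0 - (-1 + math.sqrt(2))) < 1e-12
```

Other choices of $C$ can be checked the same way by editing the constant at the top.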
Consider the PDE called the \emph{transport equation} with specular derivatives: \begin{equation} \label{PDE : transport equation with specuular derivatives} \partial^S_t u + \mathbf{b} \innerprd D^S_{\mathbf{x}} u = c \chi_{\left\{ \mathbf{x} = t\mathbf{b} \right\}} \quad \text{in }\mathbb{R}^{n} \times ( 0, \infty ), \end{equation} where the vector $\mathbf{b} \in \mathbb{R}^{n}$ and the constant $c \in \mathbb{R}$ are given, the function $u : \mathbb{R}^{n} \times [ 0, \infty ) \to \mathbb{R}$ is the unknown with $u = u(\mathbf{x}, t) = u(x_1, \cdots, x_n, t)$, and $\chi_{A} : \mathbb{R}^{n} \times ( 0, \infty ) \to \left\{ 0, 1 \right\}$ is the characteristic function of a subset $A \subset \mathbb{R}^n \times ( 0, \infty )$ defined by \begin{equation*} \chi_{A} ( \mathbf{x}, t ) = \begin{cases} 1 & \text{if } ( \mathbf{x}, t ) \in A,\\ 0 & \text{if } ( \mathbf{x}, t ) \notin A. \end{cases} \end{equation*} Here the spatial variable $\mathbf{x} = ( x_1, x_2, \cdots, x_n ) \in \mathbb{R}^{n}$ and the time variable $t \geq 0$ denote a typical point in space and time, respectively. From now on, we deal with the PDE \eqref{PDE : transport equation with specuular derivatives} with a certain initial value: the generalized ReLU function.
Here, we state the initial-value problem with specular derivatives: \begin{equation} \label{PDE : the initial-value problem for nD} \begin{cases} \begin{aligned} \displaystyle \partial^S_t u + \mathbf{b} \innerprd D^S_{\mathbf{x}} u &= c \chi_{\left\{ \mathbf{x} = t\mathbf{b} \right\}}, & &\text{in } \mathbb{R}^n \times ( 0, \infty ), \\ u &= g & &\text{on }\mathbb{R}^n \times \left\{ t = 0 \right\}, \end{aligned} \end{cases} \end{equation} where the function $g : \mathbb{R}^n \to \mathbb{R}$ defined by \begin{equation} \label{PDE : the initial condition for nD} g( \mathbf{x} ) = \begin{cases} a_1 ( x_1 + x_2 + \cdots + x_n ) & \text{if } x_1 + x_2 + \cdots + x_n \geq 0,\\ a_2 ( x_1 + x_2 + \cdots + x_n ) & \text{if } x_1 + x_2 + \cdots + x_n < 0, \end{cases} \end{equation} is given with fixed constants $a_1$, $a_2 \in \mathbb{R}$. Here $\mathbf{x} = ( x_1, x_2, \cdots, x_n ) \in \mathbb{R}^{n}$ denotes a typical point in space, and $t \geq 0$ denotes a typical time. Recall the PDE, the so-called \emph{transport equation}, with constant coefficients and the initial-value: \begin{equation} \label{PDE : transport equation with classical derivatives} \begin{cases} \begin{aligned} \displaystyle u_t + \mathbf{b} \innerprd D_{\mathbf{x}}u &= f & &\text{in } \mathbb{R}^n \times ( 0, \infty ), \\ u &= g & &\text{on }\mathbb{R}^n \times \left\{ t = 0 \right\}, \end{aligned} \end{cases} \end{equation} where the vector $\mathbf{b} \in \mathbb{R}^{n}$ and the functions $f : \mathbb{R}^n \times ( 0, \infty ) \to \mathbb{R}$, $g : \mathbb{R}^n \to \mathbb{R}$ are given, and the function $u : \mathbb{R}^{n} \times [ 0, \infty ) \to \mathbb{R}$ is the unknown with $u = u(\mathbf{x}, t) = u(x_1, x_2, \cdots, x_n, t)$. Here $\mathbf{x} = ( x_1, x_2, \cdots, x_n ) \in \mathbb{R}^{n}$ and $t \geq 0$ denote a typical point in space and a typical time, respectively.
The solution of the initial-value problem \eqref{PDE : transport equation with classical derivatives} is \begin{equation*} u(\mathbf{x}, t) = g(\mathbf{x}-t \mathbf{b})+\int_{0}^{t} f(\mathbf{x} + (s - t) \mathbf{b}, s) d s \end{equation*} for $\mathbf{x} \in \mathbb{R}^{n}$ and $t \geq 0$. The Chain Rule is used in solving the PDE \eqref{PDE : transport equation with classical derivatives}. The detailed explanation is in \cite{2010_Evans_BOOK}. First of all, we try to solve the equation for a one-dimensional spatial variable, that is, \begin{equation} \label{PDE : the initial-value problem for 1D} \begin{cases} \begin{aligned} \displaystyle \partial^S_t u +b \partial^S_x u &= c \chi_{\left\{ x=tb \right\}} & &\text{in } \mathbb{R} \times ( 0, \infty ), \\ u &= g & &\text{on }\mathbb{R} \times \left\{ t = 0 \right\}, \end{aligned} \end{cases} \end{equation} where constants $b$, $c \in \mathbb{R}$ are given, the function $u : \mathbb{R} \times [ 0, \infty ) \to \mathbb{R}$ is the unknown with $u = u ( x, t )$, $\chi_A$ is the characteristic function of a set $A \subset \mathbb{R}\times ( 0, \infty )$, and the function $g : \mathbb{R} \to \mathbb{R}$ defined by \begin{equation} \label{PDE : the initial condition for 1D} g( x ) = \begin{cases} a_1x & \text{if } x \geq 0,\\ a_2x & \text{if } x < 0, \end{cases} \end{equation} is given with fixed constants $a_1$, $a_2 \in \mathbb{R}$. Here $x \in \mathbb{R}$ denotes a typical point in space, and $t \geq 0$ denotes a typical time. The above setting can be illustrated as follows. \begin{figure} \caption{The setting for the initial-value problem with specular derivatives} \label{Fig : The setting for the initial-value problem with specular derivatives} \end{figure} Note that the function $g$ can be regarded as the generalized ReLU function by substituting zero in place of $a_2$. When we solve the transport equation with classical derivatives, the Chain Rule plays a crucial role.
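Away from the line $\left\{ x = tb \right\}$, the transported initial datum $g(x - tb)$ satisfies the homogeneous transport equation in the classical sense, which a finite-difference spot check can illustrate. The following is a rough sketch in Python (the sample constants $a_1 = 4$, $a_2 = 1$, $b = 2$ are illustrative choices, not mandated by the problem):

```python
A1, A2, B = 4.0, 1.0, 2.0   # sample constants for g and the transport speed

def g(x):
    # Initial datum from the 1-D problem: a generalized ReLU.
    return A1 * x if x >= 0 else A2 * x

def u(x, t):
    # Candidate solution transported along characteristics.
    return g(x - t * B)

# Central differences at a point strictly above the line {x = t b}.
x, t, h = 3.0, 1.0, 1e-6
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
assert abs(u_t + B * u_x) < 1e-6   # u_t + b u_x = 0 off the singular line
```

Only on the singular line itself do the one-sided derivatives disagree, which is where the specular derivative and the constant $c$ enter.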
However, as for specular derivatives, the Chain Rule may fail (see Remark \ref{Rmk : Specular derivatives may do not have linearity}). If $x \neq tb$, solving the transport equation as in the classical derivative sense, one can find that the incomplete solution of \eqref{PDE : the initial-value problem for 1D} is \begin{equation*} u(x, t) = \begin{cases} a_1 ( x - tb ) & \text{if } x > tb,\\ a_2 ( x - tb ) & \text{if } x < tb . \end{cases} \end{equation*} Now, assume $x = tb$. Then we calculate that \begin{equation*} \partial^R_t u = -a_1 b,~ \partial^L_t u = -a_2 b \qquad \text{and} \qquad \partial^R_x u = a_1,~ \partial^L_x u = a_2. \end{equation*} Applying Proposition \ref{Prop : Calculating spd}, one can compute that \begin{equation*} \partial^S_t u = 0 = \partial^S_x u \end{equation*} if $a_1 + a_2 = 0$ as well as \begin{equation*} \partial^S_t u = \frac{a_1 a_2 b^2 - 1 + \sqrt{\left( a_1^2 b^2 + 1 \right)\left( a_2^2 b^2 + 1 \right)}}{ -( a_1 + a_2 )b} \qquad \text{and} \qquad \partial^S_x u = \frac{a_1 a_2 - 1 + \sqrt{\left( a_1^2 + 1 \right)\left( a_2^2 + 1 \right)}}{a_1 + a_2} \end{equation*} if $a_1 + a_2 \neq 0$. In either case, the equality $\partial^S_t u + b \partial^S_x u = c$ must hold. Therefore, we deduce that a solution of \eqref{PDE : the initial-value problem for 1D} exists provided $c$ satisfies the equality \begin{equation} \label{PDE : the initial-value problem for 1D existence} c = \begin{cases} \displaystyle \frac{1-\sqrt{\left( a_1^2 b^2 + 1 \right)\left( a_2^2 b^2 + 1 \right)}-b^2\left( 1- \sqrt{\left( a_1^2 + 1 \right)\left( a_2^2 + 1 \right)} \right)}{( a_1 + a_2 )b} & \text{if } a_1 + a_2 \neq 0,\\[0.45cm] 0 & \text{if } a_1 + a_2 = 0.
\end{cases} \end{equation} Consequently, since the values of $u(x, t)$ on $\left\{ x = tb \right\}$ do not affect specular derivatives on $\left\{ x = tb \right\}$, a solution of \eqref{PDE : the initial-value problem for 1D} is \begin{equation*} u(x, t) = \begin{cases} a_1 ( x - tb ) & \text{if } x \geq tb,\\ a_2 ( x - tb ) & \text{if } x < tb. \end{cases} \end{equation*} For instance, consider the initial-value problem \begin{equation} \label{PDE : transport equation with specuular derivatives example} \begin{cases} \begin{aligned} \displaystyle 10 \partial^S_t u + 20 \partial^S_x u &= \left( 4\sqrt{34} -5\sqrt{13} -3 \right) \chi_{\left\{ x=2t \right\}} & &\text{in } \mathbb{R} \times ( 0, \infty ), \\ u &= g & &\text{on }\mathbb{R} \times \left\{ t = 0 \right\}, \end{aligned} \end{cases} \end{equation} where \begin{equation*} g(x) = \begin{cases} 4x & \text{if } x \geq 0,\\ x & \text{if } x < 0. \end{cases} \end{equation*} Then the solution of \eqref{PDE : transport equation with specuular derivatives example} exists since $\displaystyle c = \frac{1}{10} \left( 4\sqrt{34} -5\sqrt{13} -3 \right)$ satisfies the equation \eqref{PDE : the initial-value problem for 1D existence}, and the solution is \begin{equation*} u(x, t) = \begin{cases} 4( x - 2t ) & \text{if } x \geq 2t,\\ x - 2t & \text{if } x < 2t.
\end{cases} \end{equation*} As for high-dimensional spatial variables, the initial-value problem \eqref{PDE : the initial-value problem for nD} has a solution provided the equation \begin{equation*} c = \begin{cases} \displaystyle \frac{a_1 a_2 b^2 - 1 + \sqrt{\left( a_1^2 b^2 + 1 \right)\left( a_2^2 b^2 + 1 \right)}}{-( a_1 + a_2 )b} + b^n n \left( \frac{a_1 a_2 - 1 + \sqrt{\left( a_1^2 + 1 \right)\left( a_2^2 + 1 \right)}}{a_1 + a_2} \right) & \text{if } a_1 + a_2 \neq 0,\\[0.45cm] 0 & \text{if } a_1 + a_2 = 0, \end{cases} \end{equation*} holds and the solution is \begin{equation*} u(\mathbf{x}, t) = g( \mathbf{x} - t\mathbf{b} ), \end{equation*} where $g$ is the initial datum \eqref{PDE : the initial condition for nD}. \section{Appendix} \subsection{Delayed proofs} In this subsection, we provide the proofs of Proposition \ref{Prop : Calculating spd}, Corollary \ref{Crl : Calculating spd}, and Lemma \ref{Lmm : average of angle}. \subsubsection{Proof of Proposition \ref{Prop : Calculating spd}} \label{Prop : Calculating spd proof} Note that specular derivatives do not depend on the translation or radius of circles in the definition. \begin{proof} Without loss of generality, suppose $f[0]=0$. Write $f^{\spd}_{+}(0)=:\alpha_0$ and $f^{\spd}_{-}(0)=:\beta_0$. From \eqref{x of the intersection between the ball and pht}, we get \begin{equation*} a_0 = \frac{r}{\sqrt{\alpha_0^2 + 1}} \qquad \text{and} \qquad b_0 = - \frac{r}{\sqrt{\beta_0^2 + 1}} \end{equation*} as the $x$-coordinates of the intersection points of $\operatorname{pht}f$ at $x=0$ with the circle centered at the origin with radius $r>0$.
Denote the intersection points of the circle centered at the origin with radius $r$ and $\operatorname{pht}f$ at $x=0$ by $\text{A}=(a_0, \operatorname{pht}f(a_0))$ and $\text{B}=(b_0, \operatorname{pht}f(b_0))$, i.e., \begin{equation*} \text{A} = \left(\frac{r}{\sqrt{\alpha_0^2 + 1}}, \frac{r\alpha_0}{\sqrt{\alpha_0^2 + 1}} \right) \qquad \text{and} \qquad \text{B} = \left( -\frac{r}{\sqrt{\beta_0^2 + 1}}, -\frac{r\beta_0}{\sqrt{\beta_0^2 + 1}} \right). \end{equation*} Since $f^{\spd}(0)$ is equal to the slope of the line AB, we find that \begin{align*} f^{\spd}(0) &= \frac{\operatorname{pht}f(a_0)-\operatorname{pht}f(b_0)}{a_0 - b_0} \\ &=\left( \frac{\beta_0 \sqrt{\alpha_0^2 + 1}+\alpha_0 \sqrt{\beta_0^2 + 1}}{\sqrt{\left(\alpha_0^2 +1\right)\left(\beta_0^2 + 1\right)}}\right) \left( \frac{{\sqrt{\left(\alpha_0^2 +1\right)\left(\beta_0^2 + 1\right)}}}{\sqrt{\alpha_0^2 + 1}+\sqrt{\beta_0^2 + 1}} \right) \\ &= \frac{\beta_0 \sqrt{\alpha_0^2 + 1}+\alpha_0 \sqrt{\beta_0^2 + 1}}{\sqrt{\alpha_0^2 + 1}+\sqrt{\beta_0^2 + 1}}. \end{align*} If $\alpha_0 + \beta_0 = 0$, it is obvious that $f^{\spd}(0)=0$. Now, assume $\alpha_0 + \beta_0 \neq 0$. Then, we calculate that \begin{align*} f^{\spd}(0) &= \frac{\left(\beta_0 \sqrt{\alpha_0^2 + 1}+\alpha_0 \sqrt{\beta_0^2 + 1}\right)\left( \sqrt{\alpha_0^2 + 1}-\sqrt{\beta_0^2 + 1} \right)}{\alpha_0^2 -\beta_0^2 } \\ &= \frac{\beta_0 \left(\alpha_0^2 + 1\right)-\alpha_0 \left(\beta_0^2 + 1\right) + \left(\alpha_0 - \beta_0\right)\sqrt{\left(\alpha_0^2 + 1\right)\left(\beta_0^2 + 1\right)} }{\alpha_0^2 -\beta_0^2 } \\ &= \frac{\alpha_0 \beta_0 \left(\alpha_0 -\beta_0\right)-\left(\alpha_0 -\beta_0\right)+ \left(\alpha_0 - \beta_0\right)\sqrt{\left(\alpha_0^2 + 1\right)\left(\beta_0^2 + 1\right)} }{\left(\alpha_0 -\beta_0\right)\left(\alpha_0 +\beta_0\right) } \\ &= \frac{\alpha_0 \beta_0 -1+ \sqrt{\left(\alpha_0^2 + 1\right)\left(\beta_0^2 + 1\right)} }{\alpha_0 +\beta_0}, \end{align*} as required.
\end{proof} \subsubsection{Useful lemma} \label{Lem : the function A for calculation of spd} Let us introduce the temporary notation for the function $A: U \to \mathbb{R}$ defined by \begin{equation*} A(\alpha, \beta) := \frac{\alpha\beta-1 + \sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)}}{\alpha+\beta} \end{equation*} for $( \alpha, \beta ) \in U $, which comes from Proposition \ref{Prop : Calculating spd}, where the domain of the function $A$ is \begin{equation*} U = \left\{ ( \alpha, \beta ) \in \mathbb{R} \times \mathbb{R} : \alpha + \beta \neq 0 \right\}. \end{equation*} Analysis of this function can be useful in proving various properties of specular derivatives: Corollary \ref{Crl : Calculating spd}, Corollary \ref{Crl : extending calculation for direcrional spd in Rn}, and Theorem \ref{Thm : estimate of the specularly directional derivative}. \begin{lemma} \label{Lem : the function A} For every $( \alpha, \beta ) \in U$, the following statements hold: \begin{enumerate}[label=(\roman*)] \rm\item $A(\alpha, \beta) \neq 0$. \label{Lem : the function A - 1} \rm\item \emph{Signs of $\alpha + \beta$ and $A(\alpha, \beta)$ are equal, i.e., $\operatorname{sgn}(\alpha + \beta) = \operatorname{sgn}(A(\alpha, \beta))$.} \label{Lem : the function A - 2} \rm\item $\displaystyle - \frac{\left\vert \alpha + \beta \right\vert}{2} \leq A(\alpha, \beta) \leq \frac{\left\vert \alpha + \beta \right\vert}{2}$. \label{Lem : the function A - 3} \end{enumerate} \end{lemma} \begin{proof} First of all, we prove \ref{Lem : the function A - 1}. Let $( \alpha, \beta ) \in U$, that is, let $\alpha$ and $\beta$ be real numbers with $\alpha + \beta \neq 0$. Suppose to the contrary that $A(\alpha, \beta) = 0$, i.e., $\sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)} = 1 - \alpha\beta$. Squaring both sides, we calculate that \begin{equation*} \left( \alpha^2 + 1 \right) \left( \beta^2 + 1 \right) = 1 + \alpha^2 \beta^2 - 2\alpha \beta, \end{equation*} and we find that $\alpha + \beta = 0$, which implies that $( \alpha, \beta ) \notin U$, a contradiction.
Hence, we conclude that $A( \alpha, \beta ) \neq 0$. Next, to show \ref{Lem : the function A - 2} and \ref{Lem : the function A - 3}, let $( \alpha, \beta )$ be an element of the domain $U$. The application of the Arithmetic Mean-Geometric Mean Inequality to $\alpha^2 + 1 > 0$ and $\beta^2 + 1 > 0$ implies that \begin{equation*} \sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)} \leq \frac{\alpha^2 + \beta^2}{2} + 1 \end{equation*} and then we have \begin{equation} \label{Lem : the function A - AM-GM} \alpha\beta-1 + \sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)} \leq \frac{( \alpha + \beta )^2}{2}. \end{equation} Moreover, observe that $\alpha\beta-1 + \sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)} \geq 0$: this is trivial if $1 - \alpha\beta \leq 0$, and otherwise it is equivalent, after squaring both sides of $\sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)} \geq 1 - \alpha\beta$, to $( \alpha + \beta )^2 \geq 0$. Combining with \eqref{Lem : the function A - AM-GM}, we have \begin{equation} \label{Lem : the function A - estimate} 0 \leq \alpha\beta-1 + \sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)} \leq \frac{( \alpha + \beta )^2}{2}. \end{equation} Now, dividing the left inequality of \eqref{Lem : the function A - estimate} by $\alpha + \beta$ and recalling \ref{Lem : the function A - 1}, one obtains \ref{Lem : the function A - 2}. Furthermore, dividing the right inequality of \eqref{Lem : the function A - estimate} by $\left\vert \alpha + \beta \right\vert$, we obtain \begin{equation*} \left\vert A( \alpha, \beta ) \right\vert = \frac{\alpha \beta - 1 + \sqrt{\left(\alpha^2 + 1\right)\left( \beta^2 + 1 \right)}}{\left\vert \alpha + \beta \right\vert} \leq \frac{\left\vert \alpha + \beta \right\vert}{2}, \end{equation*} which yields \ref{Lem : the function A - 3}, completing the proof. \end{proof} \subsubsection{Proof of Lemma \ref{Lmm : average of angle}} \label{Lmm : average of angle proof} \begin{proof} Write $\alpha:= f^{\spd}_+ ( x_0 )$, $\beta:= f^{\spd}_- ( x_0 )$ and $\gamma := f^{\spd} ( x_0 )$. First of all, suppose $\alpha + \beta = 0$.
Then Proposition \ref{Prop : Calculating spd} implies that \begin{equation*} \tan \theta = \gamma = 0 = \alpha + \beta = \tan \theta_1 + \tan \theta_2. \end{equation*} On the one hand, $\theta = 0$. On the other hand, observing that \begin{equation*} 0 = \frac{\tan \theta_1 + \tan \theta_2}{1-\tan \theta_1\tan \theta_2} = \tan(\theta_1 + \theta_2), \end{equation*} we conclude that \begin{equation*} \frac{1}{2}( \theta_1 + \theta_2 ) = 0 = \theta, \end{equation*} completing the proof for the case $\alpha + \beta = 0$. Next, assume $\alpha + \beta \neq 0$. Using Proposition \ref{Prop : Calculating spd}, we have \begin{equation*} \gamma = \frac{\alpha \beta - 1 + \sqrt{\left( \alpha^2 + 1 \right)\left( \beta^2 + 1 \right)}}{\alpha + \beta}, \end{equation*} which implies that $\gamma$ satisfies the second-order equation \begin{equation*} ( \alpha + \beta ) \gamma^2 + 2 ( 1-\alpha \beta ) \gamma - ( \alpha + \beta ) = 0. \end{equation*} Using this equation, observe that \begin{equation*} \tan (2\theta)=\frac{2 \tan \theta}{1- \tan^2 \theta}=\frac{2 \gamma}{1 - \gamma^2} = \frac{\alpha + \beta}{1- \alpha \beta} = \frac{\tan \theta_1 + \tan \theta_2}{1-\tan \theta_1\tan \theta_2} = \tan(\theta_1 + \theta_2), \end{equation*} which yields $2 \theta = \theta_1 + \theta_2$, as required. \end{proof} \subsection{Notation} \label{Notation} Let $f : \mathbb{R} \to \mathbb{R}$ be a single-variable function and let $x$ denote a typical point in $\mathbb{R}$. Also, let $g : \mathbb{R}^{n} \to \mathbb{R}$ be a multi-variable function with $n \in \mathbb{N}$ and let $\mathbf{x}=(x_1, \cdots, x_n)$ denote a typical point in $\mathbb{R}^n$. We write $x_i$ for the $i$-th component of the vector $\mathbf{x}$. Lastly, let $k$ be a positive integer.
In this paper, we employ the following notation: \vspace*{-0.5em} \begin{table}[H] \centering \begin{tabular}{|c|c|c|} \hline & Classical derivative & Specular derivative \\ \hline \multirow{8}{*}{$\mathbb{R} \to \mathbb{R}$} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle f'_+ \quad \text{and} \quad f'_-$ \\[0.45cm] \end{tabular} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle D^R f=D^R_x f = \frac{df}{d^R x}=f^{\spd}_+ \quad \text{and} \quad D^L f=D^L_x f = \frac{df}{d^L x}=f^{\spd}_-$ \\[0.45cm] \end{tabular} \\ \cline{2-3} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle D f = D_x f = \frac{df}{dx}=f^{\prime}=\dot{f}$,\\[0.45cm] $\displaystyle \frac{d^{k}f}{dx^{k}}=f^{(k)}$ \\[0.45cm] \end{tabular} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle D^S f=D^S_{x} f = \frac{df}{d^S x}=f^{\spd}$,\\[0.45cm] $\displaystyle \left(D^S\right)^{k} f = \left( D^S_{x}\right)^{k} f = \frac{d^{k}f}{d^S x^{k}}=f^{[k]}$ \\[0.45cm] \end{tabular} \\ \hline \multirow{16}{*}{$\mathbb{R}^n \to \mathbb{R}$} & \begin{tabular}[c]{@{}c@{}} \\ $-$\\[0.2cm] \end{tabular} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle \partial^R_{x_i}g=\frac{\partial g}{\partial^R x_{i}} \quad \text{and} \quad \partial^L_{x_i}g=\frac{\partial g}{\partial^L x_{i}}$ \\[0.45cm] \end{tabular} \\ \cline{2-3} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle \partial_{x_i} g= \frac{\partial g}{\partial x_i}=g_{x_i}$,\\[0.45cm] $\displaystyle \partial^k_{x_i} g=\frac{\partial^k g}{\partial x^k_i}$ \\[0.45cm] \end{tabular} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle \partial^S_{x_i}g=\frac{\partial g}{\partial^S x_i}$,\\[0.45cm] $\displaystyle \left(\partial^S_{x_i}\right)^k g=\frac{\partial^k g}{\partial^S x^k_i}$ \\[0.45cm] \end{tabular} \\ \cline{2-3} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle \partial_{\mathbf{u}}g = D_{\mathbf{u}} g = \nabla_{\mathbf{u}} g$ \\[0.2cm] \end{tabular} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle \partial^S_{\mathbf{u}}g$\\[0.2cm] \end{tabular} \\ \cline{2-3} &
\begin{tabular}[c]{@{}c@{}} \\ $-$\\[0.2cm] \end{tabular} & \begin{tabular}[c]{@{}c@{}} \\ $\displaystyle D^R g = D_{\mathbf{x}}^R g \quad \text{and} \quad D^L g = D_{\mathbf{x}}^L g$ \\[0.2cm] \end{tabular} \\ \cline{2-3} & \begin{tabular}[c]{@{}c@{}} \\ $Dg = D_{\mathbf{x}}g = \nabla g$\\[0.2cm] \end{tabular} & \begin{tabular}[c]{@{}c@{}} \\ $D^Sg = D_{\mathbf{x}}^S g$\\[0.2cm] \end{tabular} \\ \hline \end{tabular} \caption{Notation for classical and specular derivatives} \label{Table : Notation for classical and specular derivatives} \end{table} \end{document}
Bacterial predator-prey coevolution accelerates genome evolution and selects on virulence-associated prey defences

Ramith R. Nair, Marie Vasse, Sébastien Wielgoss, Lei Sun, Yuen-Tsu N. Yu & Gregory J. Velicer

Nature Communications volume 10, Article number: 4301 (2019)

Subjects: Coevolution, Experimental evolution, Molecular evolution

Abstract

Generalist bacterial predators are likely to strongly shape many important ecological and evolutionary features of microbial communities, for example by altering the character and pace of molecular evolution, but investigations of such effects are scarce. Here we report how predator-prey interactions alter the evolution of fitness, genomes and phenotypic diversity in coevolving bacterial communities composed of Myxococcus xanthus as predator and Escherichia coli as prey, relative to single-species controls. We show evidence of reciprocal adaptation and demonstrate accelerated genomic evolution specific to coevolving communities, including the rapid appearance of mutator genotypes. Strong parallel evolution unique to the predator-prey communities occurs in both parties, with predators driving adaptation at two prey traits associated with virulence in bacterial pathogens—mucoidy and the outer-membrane protease OmpT. Our results suggest that generalist predatory bacteria are important determinants of how complex microbial communities and their interaction networks evolve in natural habitats.

Introduction

Predation shapes communities and ecosystems in many ways, including by contributing to resource turnover1,2, driving the abundance and diversity of prey species3,4, inducing the evolution of novel predatory5 and prey-defense traits6,7,8 and indirectly altering interactions of prey with non-predatory species3,9.
Sufficiently long periods of repeated interaction between predator and prey lineages can lead to Red Queen coevolution, in which cycles of reciprocal selection alter the biotic selective environment of both parties over time10,11,12,13. In turn, such increased variation of the biotic environment over time is expected to accelerate the pace of adaptive evolution13,14,15 relative to evolution in the absence of interaction. Although predation is most often associated with animals, the microbial world is also rife with highly diverse predators. Protists consume a phylogenetically broad range of bacteria by ingesting whole cells16, with some being highly selective of prey cell size17. Bacterial predators have evolved both generalist and more specialized mechanisms of predation18,19. For example, myxobacteria such as Myxococcus xanthus are present across a broad range of microbial habitats20 and consume a wide diversity of prokaryotic and eukaryotic microbes21,22,23. The more specialized bacterial predator Bdellovibrio bacteriovorus consumes Gram-negative prey by using pili to invade the periplasm and subsequently feed internally on prey–cell contents before causing lysis and the release of predator offspring24. Bacteriophages are obligate parasites (also often referred to as predators) that inject their genetic material into host cells via attachment to specific cell-surface molecules and often have narrow host ranges25. Previous evolution experiments have shown that prey evolution can be influenced by predation from protists26, specialized bacterial predators27 and phage28, and that the spatial distribution of prey can determine how bacterial predatory behaviors evolve29. However, to our knowledge, it remains largely unexplored how predation by generalist bacterial predators shapes the genomic and phenotypic evolution of coevolving predator–prey communities, as well as the evolution of interaction networks within complex microbial communities. 
Myxococcus xanthus is perhaps best known for its cooperative formation of multicellular fruiting bodies upon starvation20, but as a predator M. xanthus consumes a wide variety of other microbial species with varying degrees of efficiency22,23. For example, M. xanthus predates efficiently on Escherichia coli23, a human-gut symbiont and occasional pathogen of the digestive and urinary tracts and bloodstream30 that can also be isolated from non-host environments including soil31. The precise molecular mechanisms by which M. xanthus cells externally lyse and decompose prey cells remain poorly understood21,32, but can involve secreted antibiotics and extracellular digestive enzymes, some of which may be released from outer-membrane vesicles21,33. It has been proposed that, like fruiting-body development34, predatory killing and lysis of prey cells by M. xanthus involves density-dependent cooperation (and has thus been likened to "wolf-pack" predation)35, but evidence offered in support of this hypothesis is indirect32,36. Here, we test whether and how interaction between M. xanthus as predator and E. coli as prey in co-evolving two-species communities alters genomic and phenotypic evolution relative to single-species populations evolving under the same (or similar) abiotic conditions. To do so, we establish communities in which M. xanthus is trophically dependent on E. coli as its sole direct carbon substrate for growth, while the prey grows on an added monosaccharide (glucose) that M. xanthus is unable to utilize. Glucose is the sole abiotic carbon source provided in the coevolution treatment, in the prey-only control treatment and in one of two predator-only control treatments (Fig. 1a, Supplementary Table 1). Because M. 
xanthus is predicted to go extinct in the predator-only control with glucose minimal medium, we establish a second predator-only control using the same basal buffered medium as the other treatments but supplemented with casitone, an amino-acid-rich carbon source allowing predator growth in the absence of prey. Replicate communities and populations undergo 25 growth cycles (3.5 days each, see Methods). At the end of each cycle, 1% of each community or population is randomly transferred to a fresh media flask, giving ~6.64 generations of replacement growth per cycle, or ~166 generations over the entire experiment. However, generation numbers may be somewhat increased for prey in coevolving populations due to predation. Fitness patterns indicate adaptation specific to the coevolution treatment. a Coevolved predator-prey communities (12) and control-evolved predator or prey populations (six each) were propagated on minimal medium supplemented with glucose or casitone. The figure was created by the authors. b Fitness of coevolved prey relative to control-evolved prey in direct competition experiments in the presence and absence of predators (n = 12). c Fitness of coevolved relative to ancestral predators in direct competitions in the presence of prey and during growth on casitone in the absence of prey (n = 12). Gray dots are population means, black dots are treatment means and error bars show 95% confidence intervals (t-distribution). Source data are provided as a Source Data file Red Queen coevolution is expected to occur over extended periods in this arena. In this initial time-limited experiment, we find fitness patterns among the coevolved prey and predator lineages indicative of reciprocal adaptation. In addition, coevolved lineages of both predators and prey evolve faster, i.e., they accumulate more mutations, compared to control lineages evolved in isolation. 
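The generation arithmetic above follows directly from the 100-fold dilution at each transfer: regrowth to the pre-transfer population size requires log2(100) ≈ 6.64 doublings per cycle, and 25 cycles therefore give ≈166 generations. A minimal sketch of this calculation (illustrative only, using the 1% transfer fraction and cycle count stated above):

```python
import math

transfer_fraction = 0.01  # 1% of each community transferred per cycle
cycles = 25               # growth cycles in the experiment

# A population diluted 100-fold must double log2(100) times to regrow
# to its pre-transfer size, i.e. generations of replacement growth.
generations_per_cycle = math.log2(1 / transfer_fraction)
total_generations = cycles * generations_per_cycle

print(round(generations_per_cycle, 2))  # 6.64
print(round(total_generations))         # 166
```

As noted above, realized generation numbers for coevolving prey may exceed this estimate, because predatory killing forces additional compensatory growth within each cycle.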
We further identify strong signatures of parallel evolution at both genomic and phenotypic levels specific to the predator–prey communities, highlighting the importance of the biotic selective pressure in shaping the evolution of both parties. Reciprocal adaptation in predator–prey communities As expected, all predator-only populations inoculated on glucose-minimal medium rapidly went extinct (within five cycles). All other populations of both predator and prey persisted for the duration of the experiment. Adaptation was quantified with pairwise competitions between coevolved vs. control-evolved prey in the presence and absence of predators (ancestral, coevolved, and control-evolved) and between coevolved vs. ancestral predators in the presence and absence of prey (ancestral and coevolved). We modified an existing NGS-based method for measuring frequencies of genetically distinct bacterial competitors (FreqSeq37) to simultaneously estimate competitor frequencies from hundreds of competition experiments (Multiplex FreqSeq (Supplementary Fig. 1, Supplementary Tables 2 and 3)). Control experiments showed Multiplex FreqSeq to estimate E. coli competitor frequencies with high accuracy (Supplementary Fig. 2). In direct competitions between coevolved vs. control-evolved prey, the coevolved populations were fitter in all three predator contexts but not in the absence of predators, thereby indicating adaptive evolution of prey in response to general predation pressure (ANOVA, predator treatment: F3,128 = 22.07, p = 1.4 × 10−11, one-sample t tests with Holm–Bonferroni correction; ancestral: t11 = 3.78, p = 0.012; coevolved: t11 = 2.95, p = 0.027; control-evolved: t11 = 3.42, p = 0.017; no predator: t11 = −1.32, p = 0.21; Fig. 1b, Supplementary Fig. 3). 
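The Holm–Bonferroni correction applied to the t tests above is a step-down adjustment of raw p-values. The sketch below is a generic implementation with hypothetical placeholder p-values, not the study's analysis code:

```python
def holm_bonferroni(pvals):
    """Holm's step-down adjustment of a list of raw p-values.

    The i-th smallest p-value is multiplied by (m - i); the results are
    forced to be monotone non-decreasing in rank and capped at 1.0.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * pvals[idx]))
        adjusted[idx] = running_max
    return adjusted

# Hypothetical raw p-values for four tests (placeholders only).
print(holm_bonferroni([0.003, 0.009, 0.006, 0.21]))
```

The same procedure is available in statsmodels as `multipletests(pvals, method='holm')`.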
On the predator side, coevolved predators outcompeted their ancestors in all tested environments (including in the absence of prey; ANOVA, prey treatment: F2,57 = 4.908, p = 0.0115; one-sample t tests with Holm–Bonferroni correction; coevolved prey: t8 = 6.38, p = 6.4 × 10−4; ancestral prey: t10 = 2.39, p = 0.038; casitone: t10 = 2.99, p = 0.027, Fig. 1c, Supplementary Fig. 4), but did so to a greater degree while consuming coevolved prey than while consuming ancestral prey or casitone (Tukey multiple comparisons of means, coevolved vs. ancestral p = 0.047, coevolved vs. casitone p = 0.011). Although the prey vs. predator fitness competitions differ in design and are thus not directly comparable, their collective outcomes suggest that predator vs. prey adaptation may differ in patterns of specificity across evolutionary stages. Parallel evolution of mucoidy among prey populations Striking patterns of parallel evolution—both phenotypic and genomic—were detected uniquely in the predator–prey coevolution treatment. At the phenotypic level, most coevolved prey populations contained readily detectable frequencies of mucoid-colony forming variants during the experiment. For example, at cycle 18, mucoids were found in ten of the twelve coevolved populations (at frequencies ranging from ~0.0008 to ~0.2), whereas mucoidy was detected in only one prey-only control population (and there in only one colony, Supplementary Table 4). This nearly complete restriction of mucoid genotypes to predator–prey communities is extremely unlikely to have occurred by chance (exact binomial test, p < 0.001), strongly indicating that predation imposes positive selection for mucoidy, a known virulence trait in E. coli38 that also reduces susceptibility to phage infection in other species39. Multiple results point to predation-specific advantages of mucoidy. 
After a cycle of growth and predation under the same conditions as in the co-evolution experiment, the population sizes of both a non-mucoid and a mucoid isolate from the same co-evolved E. coli population (ME4) were reduced by M. xanthus relative to predator-free cultures (Fig. 2a). However, this negative effect of the predator was significantly lower for the mucoid strain (one-sided t test: t4 = −4.497, p = 0.005; Fig. 2a). Mucoidy is associated with reduced predator efficacy among coevolved prey. a End-of-cycle non-mucoid populations are reduced more by predation than mucoid populations (n = 3). b Swarming speed (relative to speed on ancestral prey) of the ancestral predator is slower through lawns of a coevolved mucoid strain than a non-mucoid strain (n = 3). Gray dots are individual replicates, black dots are means and error bars show 95% confidence intervals (t-distribution). c Qualitative predation assay of M. xanthus (center) on mucoid (left), non-mucoid (right), and ancestral (above) E. coli from seven coevolved populations. Source data are provided as a Source Data file Mucoidy was also associated with slower swarming of M. xanthus colonies through prey lawns, a parameter previously found to correlate with predatory killing and other measures of predatory performance across a broad range of prey types23. Colonies of our ancestral predator genotype swarmed more slowly through a lawn of the coevolved ME4 mucoid E. coli strain than through the ME4 non-mucoid strain (one-sided t test: t4 = −2.4386, p = 0.036, Fig. 2b). More broadly, we observed a generally reduced ability of M. xanthus to penetrate, dissolve and swarm through mucoid colonies relative to non-mucoid colonies across isolates from different replicate co-evolved communities (Fig. 2c). Collectively, our results suggest that secreted polymers causing the mucoid phenotype decrease susceptibility to predation, perhaps simply by hindering access of predator molecules to the prey cell wall. 
Parallel genome evolution among both prey and predators To investigate the impact of bacterial predator–prey coevolution on the genomic evolution of both predatory antagonists and victims, we sequenced the genomes of three clones from each coevolved and control-evolved population of predators and prey. Concomitant with parallel evolution of mucoid phenotypes among prey, coevolved populations of both predators and prey exhibited striking patterns of parallel genotypic evolution that control populations did not. Multiple loci among both prey and predator coevolved populations evolved in parallel (Fig. 3a, b; Supplementary Tables 5–8), including extreme parallelism at ompT (Fig. 3a) among prey and an uncharacterized locus (Mxan_RS27920) among predators (Fig. 3b). Parallel genomic evolution reveals adaptation specific to the coevolution treatment. a Parallel mutation of ompT among coevolved prey populations. b Parallel mutation of Mxan_RS27920 among coevolved predator populations. a, b Numbers in parentheses indicate the number of sampled clones (of three) in a population sharing the adjacent mutation. c Fitness of an ompT-deletion mutant in competition with the ompT+ ancestor in the presence and absence of the ancestral predator (n = 6 for both treatments). Gray dots are individual data points, black dots are means and error bars represent 95% confidence intervals (t-distribution). Source data are provided as a Source Data file The gene ompT, which encodes an outer-membrane protease40, was mutated in 11 out of the 12 coevolved prey populations (19 clones out of 36), with no individual mutation shared by clones from distinct populations. In contrast, this gene was not mutated in any of the six control-evolved populations (Fig. 3a). Many ompT mutations were multi-base deletions or generated premature stop codons, indicating predation-specific selection against ompT function. 
As predicted from the mutational patterns among evolved clones, experimental deletion of ompT in the ancestral E. coli genetic background conferred a significant fitness advantage when competitions were performed with predation pressure from M. xanthus but not in the absence of the predator (ANOVA, predator treatment: F1,16 = 8.42, p = 0.01, one-sample t tests with Holm–Bonferroni correction; presence: t11 = 3.96, p = 0.005 and absence: t11 = −1.006, p = 0.34; Fig. 3c, Supplementary Fig. 5). Among the predators, the Mxan_RS27920 locus was mutated or deleted in almost all clones from the 12 coevolved populations (34/36) but in none of the clones from casitone-evolved lineages, again with no individual mutations shared across populations. Accelerated genome evolution in predator–prey communities We tested for differences in the rate of genomic evolution between the coevolution vs. control treatments for both predators and prey, as measured by the average number of mutations present in evolved clones at the end of cycle 25. Even excluding mutator clones with elevated mutation rates41 (which appeared only in the coevolution treatment), prey that evolved under predation pressure accumulated ~2.7-fold more mutations on average than control-evolved prey clones (one-sided t test: t11 = 5.07, p = 0.0002, Fig. 4). Again, excluding mutators, genomic evolution was also accelerated among predators (~1.6-fold) in the coevolution treatment relative to control populations growing on casitone (one-sided t test: t11 = 2.49, p = 0.015, Fig. 4). Predator–prey interactions may be responsible for this effect, but due to supplementation of media with casitone rather than glucose in this treatment (to allow predator survival) the possibility of differential abiotic carbon-source effects on predator genome evolution cannot be excluded. Predator–prey coevolution accelerates genome evolution. 
Significantly more mutations were present among clones from coevolved prey and predator populations than in control populations (n = 3 clones for each species for most individual populations, total n = 33 and n = 18 for coevolved and control-evolved prey populations, respectively; n = 31 and n = 8 for coevolved and casitone-evolved predator populations, respectively). Mutator clones (three clones from one coevolved prey population and five clones from two coevolved predator populations) were excluded. Gray dots are the means across the included clones for each population, black dots are treatment means and error bars represent 95% confidence intervals (t-distribution) of treatment means. Source data are provided as a Source Data file High frequencies of mutator clones with elevated numbers of mutations were found in three of the twelve coevolved communities, once on the prey side and twice on the predator side. The three E. coli clones sampled from ME1 carry 15–23 mutations per clone and share a mutation in the mismatch repair gene mutL (Supplementary Table 5). The three M. xanthus clones sampled from ME4 carry 96–109 mutations, including shared mutations in the DNA repair genes mutS and recN (Supplementary Table 6) and two M. xanthus clones from ME8 (43 and 45 mutations total) also carry a mutS mutation (Supplementary Table 6). Overall, our results clearly show that bacterial predator-prey interactions increase genome evolution among prey, suggest the same for predators and also suggest that predation may promote the evolution of elevated mutation rates. In this study, fitness patterns indicate adaptation by both predators and prey specific to the coevolution treatment suggestive of early stages of Red Queen coevolution. In direct competition experiments between coevolved vs. 
control prey, the coevolved populations collectively exhibited higher fitness than the controls in the presence of all three predator types (ancestral, control-evolved, and co-evolved) but not in the absence of predation (Fig. 1b). These results suggest that coevolved prey adapted in response to traits shared by all predator types. In contrast, while coevolved predators appear to be more fit than their ancestors in all three examined environments, they have a significantly greater fitness advantage while growing on co-evolved prey than on ancestral prey (or on casitone in the absence of prey) (Fig. 1c). This pattern suggests that the predators may have adapted in response to earlier-evolved prey adaptations. Antagonistic coevolution is predicted to increase the pace of adaptive molecular evolution and this has been observed among rapidly coevolving phage15. However, such increased evolution has not to our knowledge been detected among prey or other predator types with slower generation times than phage15. In this study, genomic evolution was found to be greatly accelerated among both prey and predators in the two-species coevolution treatment compared to the respective single-species control populations. Even excluding co-evolved mutator clones, nearly threefold more mutations were present among coevolved prey genomes than in control prey (Fig. 4). Predation may have increased the proportion of prey mutations that are beneficial, the average magnitude of fitness benefits conferred by adaptive mutations or the basal mutation rate of prey, although we consider the latter hypothesis unlikely in the absence of mutator mutations. 
Predation might also have increased total mutation supply if co-evolving prey underwent more sequential generations of growth per cycle than control prey due to predatory killing, but mutation supply is also determined by the size of transferred populations and control-evolved prey population sizes were threefold higher than coevolved prey populations on average (Supplementary Fig. 6). The appearance of mutator clones exclusively in three coevolution communities (two in predator populations and one in prey) suggests that predatory interactions in such communities may increase the average benefit/cost ratio of mutator alleles41 relative to predation-free environments. Consistent with this hypothesis, mismatch-repair mutations (including one in mutS) also evolved in less than 200 generations in 25% of replicate populations of Pseudomonas fluorescens coevolving with a lytic phage, but not in phage-free control populations42. Mutator mutations may be at least transiently adaptive at the lineage level in some novel or highly variable environments by generating mutations beneficial at the individual level (including those contributing to 'epistatic adaptations' involving multiple mutations43) at a higher rate than non-mutators41, a scenario that may be promoted by predator–prey interactions. Strong patterns of parallel genomic evolution provide compelling molecular evidence of adaptation specific to the predator–prey coevolution treatment. On the predator side, all coevolved populations gained mutations in a gene (Mxan_RS27920, hereby named eatB for 'eat bacteria') that is predicted to encode a membrane protein belonging to the major facilitator superfamily44. Intriguingly, homologs of eatB are detected only in species within the suborder Cystobacterineae, suggesting a recent evolutionary origin of this gene within the myxobacteria (Myxococcales order) (Supplementary Table 9).
eatB is predicted to be transcribed within an operon composed of two genes, neither of which has previously been associated with predation or any other function. Future investigation of how mutations in eatB enhance fitness during consumption of E. coli may provide novel insights into the molecular mechanisms of M. xanthus predation. Coevolved prey also displayed high genomic parallelism, with all but one population mutated at the outer-membrane protease gene ompT, a known virulence locus among uropathogenic E. coli40. The profile of mutations at this locus (Supplementary Table 5) indicates altered or lost gene function. Competition experiments with an ompT deletion mutant confirmed that loss of this gene conferred a fitness advantage specifically in the presence of predators (Fig. 3c). Given that OmpT deactivates some antimicrobial peptides produced by mammalian hosts40, it is unclear how its deactivation increases E. coli fitness during predation by M. xanthus. Intriguingly, antimicrobial activity by a zebrafish ribonuclease is activated by OmpT-mediated45 cleavage. It is thus conceivable that functional OmpT might activate a latent predation-enhancing function by cleaving an extracellular M. xanthus peptide. The selection hotspot loci in predators (eatB) and prey (ompT) exhibit distinct frequency patterns among the three-clone sets we genome-sequenced from each population at the final evolutionary time-point. In 11/12 coevolved M. xanthus populations, all three clones carry an eatB mutation (Fig. 3b, Supplementary Table 6), which suggests that eatB mutants may have reached or approached fixation in most populations (although the possibility of ancestral alleles persisting at low frequencies cannot be excluded without further sampling). In turn, such fixation would suggest the operation of directional selection, which is characteristic of both "arms race"5 (aka "escalatory") and "chase" forms of Red Queen dynamics (which differ in other respects)13.
In contrast, among the sequenced three-clone sets from each of twelve coevolved E. coli populations, all but two are polymorphic at ompT (Fig. 3a, Supplementary Table 5). (Among the other two populations, one set had no ompT mutants and the other had three.) These ompT polymorphisms might reflect a snapshot of many populations simultaneously undergoing selective sweeps that were not yet complete. Alternatively, the prevalence of polymorphisms across populations suggests that ancestral and derived ompT alleles may be under fluctuating selection, which represents another form of Red Queen coevolution11,13,46. Multiple forms of Red Queen can operate in the same community over time11, but analysis of fitness patterns and genomic evolution at one evolutionary time point does not allow a clear test among them. However, the experimental system introduced here potentiates the future use of time-shift competition experiments11 and temporally extended analysis of genetic evolution to examine the character, dynamics and genetic basis of Red Queen predator–prey coevolution involving generalist bacterial predators. Theory predicts and empirical studies of both macroorganisms in natural communities47,48,49 and microbes in experimental communities have confirmed that predation pressure can strongly influence (and often increases) prey diversity. Such effects have been observed among bacterial prey in response to protists50, the specialist bacterial predator Bdellovibrio bacteriovorus27 and phage28. In our experiments, mucoidy—a virulence factor in some human pathogens38—evolved preferentially and pervasively as a minority phenotype among prey exposed to M. xanthus predation, showing that a generalist bacterial predator drives sympatric phenotype diversification of its prey.
Intriguingly, mucoid colony phenotypes have also evolved in response to interaction with lytic phages39, macrophages38 and antibiotics51 and have been associated with increased survival of protist grazing52, connecting this phenotypic state to an extremely divergent range of antagonistic interactions. Our finding that multiple prey traits associated with virulence in pathogens (OmpT and mucoidy) were targets of adaptation specifically among coevolved prey suggests that virulence-trait evolution may often be indirectly shaped by selective forces unrelated to pathogenesis53. It will be of interest for future studies to systematically compare how individual bacterial prey species respond evolutionarily to distinct predators that differ greatly in their mechanisms of predation, life-history traits and metabolic niche. Some evolutionary responses, such as mucoidy (or other alterations of the extracellular matrix), might jointly provide protection from multiple predator types due to a general effect of decreasing access of predatory weapons to the prey cell surface54. Other prey responses have also been found to synergistically carry over across highly divergent predators. For example, in one study, predatory selection by protists indirectly decreased prey susceptibility to phage55. And yet, the radically different modes of predation exhibited by predatory microbes can be expected to often generate predator-specific prey adaptations. For example, bacteria evolve changes at specific surface proteins in response to phage infection56 that may be unlikely to also be targets of selection in response to protists (which consume whole prey cells16) or myxobacteria, which secrete predatory antibiotics and lytic enzymes21,33. In contrast, protists can impose selection on prey cell size50,57 that is unlikely to be similarly exerted by bacterial predators or phage.
Coevolution in two-species bacterial communities rapidly induced major changes in rates and patterns of molecular evolution in both predators and prey and at multiple prey traits (OmpT and mucoidy) implicated in interactions with radically divergent antagonistic partners. In light of these powerful effects of predation in simple communities over a short evolutionary period, generalist predatory bacteria can be expected to drive diversification and shape the evolutionary dynamics of complex interaction networks in natural microbial communities50. Components of such networks include fitness relationships and competition modes among diverse prey species, interactions between prey and other categories of predators (e.g., protists and nematodes) and interactions among predators. Finally, the rapid evolutionary responses reported here suggest that experimental studies of bacterial predator–prey coevolution may helpfully inform considerations of using predatory bacteria as potential biocontrol agents in medicine58,59 and agriculture60,61.

Ancestral strains

Three subclones each of streptomycin-sensitive and resistant (K43R mutation in rpsL) variants of E. coli MG1655 (provided by Dr. Balazs Bogos) were isolated by streaking frozen glycerol stocks onto LB agar plates. These subclones were grown to stationary phase in LB medium (37 °C, 200 rpm for 8–10 h) and used to initiate 12 replicate populations coevolving with M. xanthus (two from each subclone) and six replicate prey-only populations (one from each subclone). LB broth and LB broth with agar (Lennox) were purchased as powders from Sigma-Aldrich and prepared as per the manufacturer's instructions. M. xanthus strain DK3470 was used as the ancestral predator62. This strain has a mutation (previously inferred to be in or near the dsp62/dif63 gene region) that greatly reduces extracellular matrix production62, and thus allows cultures to be readily dispersed in liquid buffer even after growth on an agar surface.
Three subclones of rifampicin-sensitive and resistant variants of DK3470 were isolated from colonies grown in CTT soft 0.5% agar (10 g/L casitone, 10 mM Tris pH 8.0, 8 mM MgSO4, 1 mM KPO4) at 32 °C after dilution plating from mid-exponential phase 8 mL liquid CTT cultures (32 °C, 300 rpm). These subclones were stored frozen and used to initiate six predator-only control populations on M9 minimal medium supplemented with glucose (or "prey-growth agar", details below; one population from each DK3470 subclone), six predator-only control populations on M9-casitone medium (details below), and 12 populations coevolving with E. coli on M9 prey-growth agar (two from each subclone, Supplementary Table 1). Three of the casitone predator-only populations were discarded due to contamination during evolution. The remaining three were analyzed.

Predator–prey coevolution arena

The experimental coevolution protocol is summarized in Fig. 1a. Coevolution was performed in 50 mL flasks containing 8 mL of prey-growth agar solid medium (1× M9 salts, 2 mM MgSO4, 0.1 mM CaCl2, 0.2% glucose, 1.5% agar). We adjusted the densities of E. coli and M. xanthus populations by resuspending, respectively, ~10⁵ E. coli cells and ~10⁹ M. xanthus cells per 50 µL in TPM buffer (10 mM Tris pH 8.0, 8 mM MgSO4, 1 mM KPO4). To initiate the coevolution populations, we mixed 50 µL each of adjusted prey and predator cultures and spread the total volume on prey-growth agar with 7–9 sterile glass beads. For prey-only controls, 50 µL of adjusted E. coli culture was mixed with 50 µL of TPM buffer. We set up two types of predator-only controls by spreading the adjusted M. xanthus cultures with 50 µL of TPM buffer onto both prey-growth agar and onto medium identical to prey-growth agar except that glucose was replaced with 0.5% casitone (a pancreatic digest of casein that M. xanthus can utilize as a carbon substrate for growth). All flasks were dried in a laminar-flow hood for about 30 min before incubation at 32 °C, 90% rH for ~84 h.
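The density adjustments described above reduce to simple dilution arithmetic. A minimal sketch, using the approximate cell numbers quoted in the text and, as an assumption, the ~10⁸ cells/mL at OD600 ~1.0 conversion given later for the competition assays (the 5 × 10⁹ cells/mL predator density is the one reported for the swarming assays):

```python
def dilution_factor(stock_density: float, target_cells: float, volume_ml: float) -> float:
    """Fold-dilution needed so that `volume_ml` of adjusted culture
    contains `target_cells` cells. Values > 1 mean dilute; < 1 mean concentrate."""
    target_density = target_cells / volume_ml  # cells per mL
    return stock_density / target_density

# ~10^5 E. coli cells per 50 µL from a culture at ~10^8 cells/mL:
prey_dilution = dilution_factor(1e8, 1e5, 0.05)
print(prey_dilution)  # 50.0 -> a 50-fold dilution

# ~10^9 M. xanthus cells per 50 µL requires 2 x 10^10 cells/mL, so a
# suspension at 5 x 10^9 cells/mL must be concentrated, not diluted:
predator_dilution = dilution_factor(5e9, 1e9, 0.05)
print(predator_dilution)  # 0.25 -> a 4-fold concentration
```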
After ~84 h of incubation, the predator–prey communities were harvested by adding 5 mL TPM buffer and shaking for ~15 min at 300 rpm at 32 °C until the cultures were well suspended and dispersed. Evolving cultures were then reduced by a factor of 100: 1% (50 µL) of the resuspended volume was mixed with 50 µL of TPM buffer and spread onto fresh prey-growth agar as described above to initiate a new cycle. The evolution experiment ran for 25 such cycles, with frozen stocks made of evolving populations and communities after every second cycle starting at cycle 0 (in 20% glycerol, stored at −80 °C) and after the terminal cycle. The size of each predator and prey population was estimated every second cycle by dilution-plating onto LB hard agar (on which M. xanthus DK3470 does not grow) for prey and into CTT soft agar containing 10 µg/mL gentamicin (in which E. coli does not grow) for predators. The presence of both predator and prey populations at consistent end-of-cycle densities throughout the experiment (Supplementary Fig. 6) indicates that both predator and prey populations averaged at least ~6.64 generations (log2(100)) of replacement growth per cycle.

Post-evolution fitness assays: competition experiments and FreqSeq analysis

To assess reciprocal adaptation, we performed fitness assays by competing coevolved populations vs. control-evolved or ancestral populations (Fig. 1b, c, Supplementary Figs. 3 and 4). E. coli and M. xanthus were isolated from coevolved populations through inoculation in LB and gentamicin-CTT as described above. All competition experiments were performed in three temporally separate replicates with whole-population samples of evolved prey and/or predator populations. Prey populations were grown in LB as described above. Cultures were adjusted to an optical density (OD600) of ~1.0 (~10⁸ cells/mL) and 500 µL each of competitors with opposite streptomycin-resistance marker types were mixed.
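The ~6.64 generations per cycle noted above follow directly from the 100-fold transfer bottleneck: a population that regrows to its pre-transfer size must double log2(100) times. A minimal sketch of the calculation (the 25-cycle total is added for illustration only):

```python
import math

def generations_per_cycle(dilution: float) -> float:
    """Doublings needed to regrow to the pre-transfer population
    size after a `dilution`-fold bottleneck."""
    return math.log2(dilution)

per_cycle = generations_per_cycle(100)
print(round(per_cycle, 2))       # 6.64 generations per cycle
print(round(per_cycle * 25, 1))  # ~166 generations over the 25-cycle experiment
```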
We diluted these mixes (1:100) and inoculated 50 µL together with 50 µL of either a predator culture (~5 × 10⁹ cells in TPM) or TPM buffer and spread onto prey-growth agar as described above. Predator cultures (ancestral, control-evolved and coevolved) were incubated in liquid gentamicin-CTT (10 µg/mL, 32 °C, 300 rpm) for ~60 h, upon which they were diluted and grown for a further 8–10 h to obtain mid-exponential cultures (OD600 ~0.5–0.8), which were subsequently centrifuged and resuspended in TPM buffer after supernatant removal. All 12 coevolved and all six control-evolved prey populations were competed against the ancestral prey (of the opposite marker state) in the presence of all three predator categories (Supplementary Fig. 7). Preliminary results (two replicates) indicated that both control and coevolved populations increased greatly in fitness relative to the ancestors, suggesting general adaptation to the abiotic conditions of the prey-growth agar selection regime. To better resolve potential adaptive responses by the coevolved prey specific to the presence of predators, we competed coevolved prey directly with control-evolved prey in the same flask. Each of the control-evolved prey populations of one marker type was competed independently against two of the coevolved prey populations of the opposite marker type. Thus, each of the 12 coevolved prey populations was included in one competition pair whereas each control-evolved prey population was included in two. Each of these direct competitions was subjected to four predator treatments: (i) the sympatrically coevolved predator, (ii) a control predator population that evolved on casitone medium in the absence of prey, (iii) the ancestral predator, (iv) no predator. Competition flasks were incubated at 32 °C, 90% rH for 84 h and harvested using the aforementioned protocol. We centrifuged 2 mL of resuspended culture (14,000g, 5 min) and stored the pellets at −20 °C until lysate preparation.
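The pairing design described above (each of six control-evolved populations matched with two of twelve coevolved populations of the opposite marker type) can be laid out programmatically. The population labels follow the ME1–ME12 and E1–E6 naming used in the text, but the specific pair assignments below are illustrative only, since the actual allocation is not specified:

```python
coevolved = [f"ME{i}" for i in range(1, 13)]  # 12 coevolved prey populations
controls  = [f"E{i}"  for i in range(1, 7)]   # 6 control-evolved prey populations

# Illustrative allocation: control k faces coevolved populations 2k-1 and 2k.
pairs = [(controls[k], coevolved[2 * k + j]) for k in range(6) for j in (0, 1)]

# Design constraints from the text:
assert len(pairs) == 12                       # one pair per coevolved population
assert len({me for _, me in pairs}) == 12     # each coevolved population used once
assert all(sum(1 for e, _ in pairs if e == c) == 2 for c in controls)  # each control used twice
```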
Pellets of the initial mixes were obtained and stored similarly. Because all predator-only populations inoculated in prey-growth agar went extinct and predators growing on casitone are not perfect controls for the coevolution treatment due to the substitution of carbon source, we competed coevolved predators with their ancestors to test for predator adaptation. In preliminary experiments, we found that predator populations initially decline substantially when transferred directly from CTT to prey-growth agar prior to subsequently growing on prey. Thus, for competition experiments we first pre-conditioned predator populations separately for three 84-h cycles on prey-growth agar in the presence of the prey used in the subsequent assay. After three cycles of acclimatization, populations were harvested as described before and stored at −20 °C for lysate preparation. We prepared competition mixes and inoculated 100 µL of each mix onto prey-growth agar. The competition experiment was conducted for another three cycles. The remainder of the initial (T0) mixes was spun down and stored at −20 °C for lysate preparation. Lysates of both prey and predator competition assays were prepared using the Triton X-100 protocol described by Goldenberger et al.64. The resulting supernatant was used for the Multiplex FreqSeq library preparation described below. We modified the FreqSeq method37 to introduce a second barcode which allows multiplexing. Multiplex FreqSeq is described in Supplementary Fig. 1 and the validation method in Supplementary Fig. 2. The single-nucleotide polymorphisms responsible for antibiotic sensitivity/resistance to streptomycin (rpsL) and rifampicin (rpoB) were used to distinguish competitors with FreqSeq analysis for E. coli and M. xanthus, respectively. A combination of two barcodes was used to label all populations involved in the competitions: each pair of right-side and left-side barcodes therefore identified one replicate of a given competition treatment. 
The primers and PCR conditions for Multiplex FreqSeq are detailed in Supplementary Tables 2 and 3. DNA libraries were normalized, pooled, and purified (Ampure XP beads) and their final sizes and concentrations were confirmed using an Agilent Bioanalyser. The pooled library was then diluted to 4 nM in water and run on a MiSeq machine at the Genomic Diversity Centre at ETH Zürich after mixing with 30% standard PhiX library. We ran one and nine MiSeq sequencing runs for the prey and predator competitions, respectively. Data from MiSeq were checked for quality using FastQC and Illumina adapters were removed using Trimmomatic v0.3365 with the parameters ILLUMINACLIP:adapter.fa:2:30:10, to leave the six-nucleotide barcode at the 3′ end. The reads were demultiplexed using the barcode splitter algorithm in the FASTX-Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/). Demultiplexed reads were stored as separate fastq files for each right-side barcode and analyzed using the Freq-Out program37 to obtain frequencies of each competitor from pre- and post-competition mixes. A bash script running each step of the data analysis is provided in Supplementary Note 1. Frequency estimates from unexpected barcodes, from samples with fewer than 10,000 reads, or beyond the range tested in the validation assay (0.05–0.95, Supplementary Fig. 2) were discarded. We calculated the relative fitness of the coevolved prey or predators using the formula of Ross-Gillespie et al.66 $$v = \frac{x_2\left(1 - x_1\right)}{x_1\left(1 - x_2\right)}$$ where x1 and x2 are the initial and final frequencies of the coevolved cells, respectively.

Phenotypic evolution of prey: mucoidy and predation resistance

To screen for mucoid variants at cycle 18, aliquots of frozen stocks of coevolved and control-evolved prey populations were thawed in 100 µL TPM buffer, dilution-plated onto LB agar and examined for the presence of mucoid colonies after overnight incubation at 32 °C, 90% rH.
We subsequently conducted a series of experiments to assess whether mucoidy confers resistance against the predators. First, we quantitatively assayed the efficacy of ancestral M. xanthus in reducing prey population size using one isolated mucoid clone and one non-mucoid clone from population ME4 (cycle 18). We compared the effect of M. xanthus on the end-of-cycle population sizes of different prey strains under the same conditions as in the evolution treatments that included prey (Fig. 2a). After 84 h, we assessed prey population size in the presence and absence of M. xanthus by dilution-plating on LB agar and calculated the percentage reduction of each type caused by the presence of predator. In experiments under the same conditions, we estimated exponential population growth rates of the same mucoid and non-mucoid strains in the absence of predator (from 5 to 20 h of growth), but no difference between the strains was detected (r = 0.52 and 0.5, respectively, two-sided t test: t4 = −0.2804, p = 0.79). In these experiments, multiple growth cultures were established for each strain within each replicate and population size at each time point was assessed by whole-culture destructive sampling. We also measured the swarming rate of the predator on the different prey types (coevolved mucoid and non-mucoid isolates from ME4 and ancestral, Fig. 2b). We spotted ancestral M. xanthus (10 µL at 5 × 10⁹ cells/mL) on top of a 48-h-old E. coli lawn on prey-growth agar and allowed it to swarm for 7 days (32 °C, 90% rH). Swarm edges were outlined after 1 day and 7 days, and migration distances were measured on opposite sides of at least three separate transects (thus at least six measurements per plate). For all assays, we performed three biological replicates with two technical replicates each.
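Exponential growth rates like those reported above (r ≈ 0.5 h⁻¹) are conventionally estimated by regressing log cell counts against time over the exponential window; the sketch below shows the generic calculation, with synthetic counts invented for illustration (the actual data and fitting procedure of the study are not reproduced here):

```python
import math

def growth_rate(times, counts):
    """Least-squares slope of ln(count) vs time, i.e. the exponential
    rate r in N(t) = N0 * exp(r * t)."""
    logs = [math.log(n) for n in counts]
    t_mean = sum(times) / len(times)
    y_mean = sum(logs) / len(logs)
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, logs))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# Synthetic counts over the 5-20 h window, growing at exactly r = 0.5 per hour:
times = [5, 10, 15, 20]
counts = [1e5 * math.exp(0.5 * t) for t in times]
print(round(growth_rate(times, counts), 3))  # 0.5
```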
Finally, we qualitatively assessed the ability of an expanding predator swarm to penetrate and lyse adjacent colonies of coevolved mucoid, coevolved non-mucoid and ancestral (non-mucoid) prey (Fig. 2c) for seven coevolved communities. For each, one coevolved mucoid clone, one coevolved non-mucoid clone and ancestral E. coli were grown in LB to OD600 ~1.0 and 10 µL were spotted on both prey-growth and on CTT agar at a 1-cm distance from a 10 µL spot of ancestral M. xanthus (5 × 10⁹ cells/mL). The plates were incubated (32 °C, 90% rH) and predator–prey colony interface phenotypes were observed every 24 h. The image shown in Fig. 2c was taken after 5 days of incubation.

Whole-genome sequencing

After cycle 25 of the evolution experiment, three independent predator and/or prey clones each were randomly picked from either LB agar (prey) or gentamicin-CTT agar (predators) for all control-evolved and coevolved populations. Isolated predator and prey clones were grown to high density in 8 mL CTT and LB liquid medium, respectively, and centrifuged at 4000g for 15 min. Pellets were stored at −80 °C until DNA was extracted. Whole-genomic DNA was isolated with Qiagen's Genomic DNA extraction buffer kit and 20/G Genomic-tips. DNA quantity was checked with a Qubit fluorometer (Thermo Fisher Scientific). Genomes of predator and prey clones were sequenced on different Illumina® HiSeq® 2500 sequencing machines in paired-end mode; prey populations were processed by Fasteris (Geneva, Switzerland) producing reads of 125 bp while predator genomes were handled by the D-BSSE Quantitative Genomics Facility of ETH Zürich (Basel, Switzerland), also with read lengths of 125 bp. Read quality was assessed using FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/).
Illumina-specific adapters/primers and low-quality base calls were trimmed from reads using Trimmomatic v0.3365 with the following basic parameters: ILLUMINACLIP:Nextera+TruSeq3-PE2.fa:2:25:10 CROP:124 HEADCROP:5 LEADING:30 TRAILING:28 SLIDINGWINDOW:4:28 MINLEN:77. The trimmed reads from E. coli prey populations were subsequently mapped to the E. coli str. K-12 substr. MG1655 reference genome67 using breseq 0.27.068. The latter step also involved error correction and variant calling (based on samtools v1.3.169; see Supplementary Tables 5 and 7 for a mutation summary for all prey populations). The trimmed reads for all the predator clones were mapped to a modified version of the reference genome of M. xanthus str. DK1622, which contains all mutations present in the experiment's founding clone, DK3470, relative to its parent strain, M. xanthus str. DK1622 (RefSeq: NC_008095). The operonic context of eatB was predicted at http://www.microbesonline.org/operons/gnc246197.html.

Inference of the reference genome for M. xanthus DK3470

To infer the reference genome of the ancestral DK3470 clone, we first relied on a whole-genome assembly of the same Illumina® reads using SPAdes v3.11.1 (with parameters -k 21,33,55,77 --careful), from which we detected the previously reported Tn5-transposon that inserted in the coding region of Mxan_RS32435 (ref. 49) and includes the transposase and genes conferring resistance to the antibiotics neomycin/kanamycin, bleomycin and streptomycin70. That transposon was previously reported to have co-localized with an unknown additional mutation, responsible for the non-cohesive growth of DK347062. We identified this unknown mutation by mapping the trimmed Illumina® reads against the published reference genome of isogenic M. xanthus str. DK1622 using breseq 0.27.0. We found that the mutation is a single base-pair deletion in an intergenic region, 43 bp downstream of Mxan_RS32420 (difA) and 1386 bp upstream of Mxan_RS32425 (thiL).
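Constructing such a corrected reference amounts to applying a small set of coordinate-sorted edits to the published sequence. A toy sketch of the idea follows; the sequence and coordinates are invented, not the real DK1622 positions, and the real edits (a large Tn5 insertion plus a single base-pair deletion) would simply be longer entries in the same edit list:

```python
def apply_edits(seq: str, edits):
    """Apply (position, ref, alt) edits to `seq` (0-based positions).
    Edits are applied right-to-left so that earlier coordinates stay valid.
    An insertion uses ref == "", a deletion uses alt == ""."""
    for pos, ref, alt in sorted(edits, reverse=True):
        assert seq[pos:pos + len(ref)] == ref, f"reference mismatch at {pos}"
        seq = seq[:pos] + alt + seq[pos + len(ref):]
    return seq

# Toy example: one single-base deletion and one insertion (e.g. a transposon):
genome = "ACGTACGTACGT"
edits = [(3, "T", ""), (8, "", "NNNN")]  # delete the T at index 3; insert NNNN before index 8
print(apply_edits(genome, edits))  # ACGACGTNNNNACGT
```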
This mutation is also present in all evolved M. xanthus descendants sequenced for this study. We included these two positional differences in the reference M. xanthus strain DK1622 and used that modified sequence for mapping of our predators.

ΔompT mutant construction

ΔompT mutants were constructed by replacing the coding sequence of the ompT gene with a kanamycin resistance marker via recombineering71. The streptomycin-sensitive and resistant ancestral E. coli strains were transformed with the recombineering plasmid pSIM6 and maintained at 30 °C with 100 mg/L ampicillin in the medium. The kanamycin marker was amplified from Keio collection strain JW0554 (ref. 72) with PCR primers ompT-F (GTTACATTGAAATGGCTAGTTATTCCCC) and ompT-R (CAGTGGAGCAATATGTAATTGACTC). The purified PCR product was then used to replace ompT in both E. coli strains using pSIM6-mediated recombineering (for a detailed protocol, see ref. 71). Positive clones were confirmed by growth on medium with 100 mg/L kanamycin and sequencing with the aforementioned primers. To ensure a clean genetic background, P1 transduction was performed using the constructed ΔompT strains as donors and the ancestral strains as recipients73. Positive P1 transductants were confirmed by kanamycin resistance, colony-PCR and sequencing. All the subsequent fitness assays were performed using the P1-transduced strains.

Relative fitness of ΔompT prey

We estimated the relative fitness of ΔompT prey in competition with ancestral E. coli under the conditions of the post-evolution fitness assays (Fig. 3c, Supplementary Fig. 5). Wild-type strains were competed against ΔompT mutants of the opposite marker type for one cycle (84 h) on the prey-growth medium in the presence and in the absence of ancestral predators. Initial and final frequencies of each strain were determined by dilution-plating onto LB agar (plain and supplemented with 100 µg/mL streptomycin).
To control for any marker effect, we also conducted competitions with clones of the opposite marker type but the same ompT genotype. Relative fitness was calculated as described above. At least six independent replicates for each competition were performed. We analyzed relative fitness of prey and predators with analysis of variance (type II ANOVA). For coevolved prey, we used predator treatment (coevolved, control-evolved, ancestral or no predator), population ID (ME1 to ME12) and competitor ID (control-evolved E. coli E1–E6) as factors. For coevolved predators, we used prey treatment (coevolved, ancestral or casitone), population ID (ME1–ME12) and competitor ID (rifampicin-resistant or sensitive ancestor) as factors. Post hoc tests were Tukey tests for multiple comparisons and one-sample t tests with Holm–Bonferroni corrections. For the t tests, data were the mean of three replicate competitions per coevolved population, so that the unit of replication was each coevolved population (n = 12). For the prey ompT mutant, explanatory factors were predator treatment (presence or absence), time of the experiment (day 1 or day 2) and the resistance phenotype of the mutant (streptomycin resistant or sensitive). We log-transformed all relative fitness data to meet the assumption of normality. We tested for differences in the rate of molecular evolution between coevolved and control-evolved populations using one-sided, two-sample t tests. Evolutionary rate was quantified as the average number of mutations found across all populations within each treatment (with each population represented by two or three independently isolated and sequenced clones) after 25 cycles of evolution. Statistical analyses were performed using RStudio (version 1.0.136)74 and the packages car and ggplot2 (refs 75,76). Further information on research design is available in the Nature Research Reporting Summary linked to this article.
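The Holm–Bonferroni step-down correction used for the one-sample t tests is a small, self-contained procedure. The analyses above were run in R, but the algorithm itself is shown here as a minimal Python sketch with invented p-values:

```python
def holm_bonferroni(pvalues, alpha=0.05):
    """Return a parallel list of booleans: True where the null hypothesis
    is rejected under Holm's step-down multiple-comparison correction."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # ascending p-values
    reject = [False] * m
    for rank, i in enumerate(order):
        # The k-th smallest p-value is compared against alpha / (m - k).
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

# Example with four hypothetical p-values:
print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```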
Genome sequences have been deposited in the SRA database under BioProject accession PRJNA551936 (BioSample accessions SAMN12169214–SAMN12169315). All other data generated or analyzed during this study are included in this published article (and its Supplementary information files). The source data underlying Figs. 1b, c, 2a, b, 3c, 4, and Supplementary Figs. 1–3 and 5–7 are provided as a Source Data file. Leibold, M. A., Chase, J. M., Shurin, J. B. & Downing, A. L. Species turnover and the regulation of trophic structure. Annu. Rev. Ecol. Syst. 28, 467–494 (1997). Hairston, N. G., Smith, F. E. & Slobodkin, L. B. Community structure, population control, and competition. Am. Nat. 94, 421–425 (1960). Estes, J., Crooks, K. & Holt, R. D. Predators, ecological role of. In Encyclopedia of Biodiversity 229–249 (Elsevier, 2013). https://doi.org/10.1016/B978-0-12-384719-5.00117-9 Sinclair, A. R. E., Mduma, S. & Brashares, J. S. Patterns of predation in a diverse predator-prey system. Nature https://doi.org/10.1038/nature01934 (2003). Brodie III, E. D., Brodie, E. D. Jr. & Brodie, E. D. Predator-prey arms races. Bioscience 49, 557–568 (1999). Hass, C. C. & Valenzuela, D. Anti-predator benefits of group living in white-nosed coatis (Nasua narica). Behav. Ecol. Sociobiol. 51, 570–578 (2002). Stevens, M. & Merilaita, S. Animal camouflage: current issues and new perspectives. Philos. Trans. R. Soc. B Biol. Sci. 364, 423–427 (2009). Matz, C. et al. Marine biofilm bacteria evade eukaryotic predation by targeted chemical defense. PLoS ONE 3, e2744 (2008). Estes, J. A., Burdin, A. & Doak, D. F. Sea otters, kelp forests, and the extinction of Steller's sea cow. Proc. Natl Acad. Sci. https://doi.org/10.1073/pnas.1502552112 (2016). Van Valen, L. A new evolutionary law. Evol. Theory 1, 1–30 (1973). Hall, A. R., Scanlan, P. D., Morgan, A. D. & Buckling, A.
Host-parasite coevolutionary arms races give way to fluctuating selection. Ecol. Lett. 14, 635–642 (2011). Brockhurst, M. A. & Koskella, B. Experimental coevolution of species interactions. Trends Ecol. Evol. 28, 367–375 (2013). Brockhurst, M. A. et al. Running with the Red Queen: the role of biotic conflicts in evolution. Proc. R. Soc. B Biol. Sci. 281, 20141382 (2014). Van Valen, L. Molecular evolution as predicted by natural selection. J. Mol. Evol. 3, 89–101 (1974). Paterson, S. et al. Antagonistic coevolution accelerates molecular evolution. Nature 464, 275–278 (2010). Fenchel, T. Ecology of Protozoa. (Springer, Berlin, Heidelberg, 1987). https://doi.org/10.1007/978-3-662-06817-5 Simek, K. & Chrzanowski, T. H. Direct and indirect evidence of size-selective grazing on pelagic bacteria by freshwater nanoflagellates. Appl. Environ. Microbiol. 58, 3715–3720 (1992). Pérez, J., Moraleda-Muñoz, A., Marcos-Torres, F. J. & Muñoz-Dorado, J. Bacterial predation: 75 years and counting! Environ. Microbiol. 18, 766–779 (2016). Jurkevitch, E. Predatory prokaryotes: biology, ecology and evolution. (Springer Science & Business Media, 2006). Reichenbach, H. The ecology of the myxobacteria. Environ. Microbiol. 1, 15–21 (1999). Muñoz-Dorado, J., Marcos-Torres, F. J., García-Bravo, E., Moraleda-Muñoz, A. & Pérez, J. Myxobacteria: moving, killing, feeding, and surviving together. Front. Microbiol. 7, 781 (2016). Morgan, A. D., MacLean, R. C., Hillesland, K. L. & Velicer, G. J. Comparative analysis of Myxococcus predation on soil bacteria. Appl. Environ. Microbiol. 76, 6920–6927 (2010). Mendes-Soares, H. & Velicer, G. J. Decomposing predation: testing for parameters that correlate with predatory performance by a social bacterium. Microb. Ecol. 65, 415–423 (2013). Jurkevitch, E. A brief history of short bacteria: a chronicle of Bdellovibrio (and like organisms) research in Predatory Prokaryotes.
(eds Jurkevitch, E. & Steinbüchel A.) 4, 1–9 (Springer, Berlin, Heidelberg, 2007). de Jonge, P. A., Nobrega, F. L., Brouns, S. J. J. & Dutilh, B. E. Molecular and evolutionary determinants of bacteriophage host range. Trends Microbiol. https://doi.org/10.1016/j.tim.2018.08.006 (2019). Huang, W., Traulsen, A., Werner, B., Hiltunen, T. & Becks, L. Dynamical trade-offs arise from antagonistic coevolution and decrease intraspecific diversity. Nat. Commun. https://doi.org/10.1038/s41467-017-01957-8 (2017). Gallet, R. et al. Predation and disturbance interact to shape prey species diversity. Am. Nat. 170, 143–154 (2007). Brockhurst, M. A., Morgan, A. D., Fenton, A. & Buckling, A. Experimental coevolution with bacteria and phage: The Pseudomonas fluorescens—Φ2 model system. Infect. Genet. Evol. 7, 547–552 (2007). Hillesland, K. L., Velicer, G. J. & Lenski, R. E. Experimental evolution of a microbial predator's ability to find prey. Proc. Biol. Sci. 276, 459–467 (2009). Kaper, J. B., Nataro, J. P. & Mobley, H. L. T. Pathogenic Escherichia coli. Nat. Rev. Microbiol. 2, 123–140 (2004). Ishii, S., Ksoll, W. B., Hicks, R. E. & Sadowsky, M. J. Presence and growth of naturalized Escherichia coli in temperate soils from Lake Superior watersheds. Appl. Environ. Microbiol. 72, 612–621 (2006). Marshall, R. C. & Whitworth, D. E. Is "Wolf‐Pack" predation by antimicrobial bacteria cooperative? Cell behaviour and predatory mechanisms indicate profound selfishness, even when working alongside kin. BioEssays 41, 1800247 (2019). Berleman, J. E. et al. The lethal cargo of Myxococcus xanthus outer membrane vesicles. Front. Microbiol. 5, 1–43 (2014). Shimkets, L. J. Intercellular signaling during fruiting-body development of Myxococcus xanthus. Annu. Rev. Microbiol. 53, 525–549 (1999). Rosenberg, E., Keller, K. H. & Dworkin, M. Cell density dependent growth of Myxococcus xanthus on casein. J. Bacteriol. 129, 770–777 (1977). Velicer, G. J. & Vos, M. Sociobiology of the Myxobacteria. Annu. Rev. 
Nair, R.R., Vasse, M., Wielgoss, S. et al. Bacterial predator-prey coevolution accelerates genome evolution and selects on virulence-associated prey defences. Nat Commun 10, 4301 (2019). https://doi.org/10.1038/s41467-019-12140-6
Szemerédi–Trotter theorem

The Szemerédi–Trotter theorem is a mathematical result in the field of discrete geometry. It asserts that given n points and m lines in the Euclidean plane, the number of incidences (i.e., the number of point-line pairs, such that the point lies on the line) is $O\left(n^{2/3}m^{2/3}+n+m\right).$ This bound cannot be improved, except in terms of the implicit constants. As for the implicit constants, it was shown by János Pach, Radoš Radoičić, Gábor Tardos, and Géza Tóth[1] that the upper bound $2.5n^{2/3}m^{2/3}+n+m$ holds. Since then better constants are known due to better crossing lemma constants; the current best is 2.44.[2] On the other hand, Pach and Tóth showed that the statement does not hold true if one replaces the coefficient 2.5 with 0.42.[3] An equivalent formulation of the theorem is the following. Given n points and an integer k ≥ 2, the number of lines which pass through at least k of the points is $O\left({\frac {n^{2}}{k^{3}}}+{\frac {n}{k}}\right).$ The original proof of Endre Szemerédi and William T. Trotter was somewhat complicated, using a combinatorial technique known as cell decomposition.[4][5] Later, László Székely discovered a much simpler proof using the crossing number inequality for graphs.[6] (See below.) The Szemerédi–Trotter theorem has a number of consequences, including Beck's theorem in incidence geometry and the Erdős-Szemerédi sum-product problem in additive combinatorics.

Proof of the first formulation

We may discard the lines which contain two or fewer of the points, as they can contribute at most 2m incidences to the total number. Thus we may assume that every line contains at least three of the points. If a line contains k points, then it will contain k − 1 line segments which connect two consecutive points along the line.
Because k ≥ 3 after discarding the two-point lines, it follows that k − 1 ≥ k/2, so the number of these line segments on each line is at least half the number of incidences on that line. Summing over all of the lines, the number of these line segments is again at least half the total number of incidences. Thus if e denotes the number of such line segments, it will suffice to show that $e=O\left(n^{2/3}m^{2/3}+n+m\right).$ Now consider the graph formed by using the n points as vertices, and the e line segments as edges. Since each line segment lies on one of m lines, and any two lines intersect in at most one point, the crossing number of this graph is at most the number of points where two lines intersect, which is at most m(m − 1)/2. The crossing number inequality implies that either e ≤ 7.5n, or that m(m − 1)/2 ≥ e^3/(33.75n^2). In either case e ≤ 3.24(nm)^{2/3} + 7.5n, giving the desired bound $e=O\left(n^{2/3}m^{2/3}+n+m\right).$

Proof of the second formulation

Since every pair of points can be connected by at most one line, there can be at most n(n − 1)/2 lines which connect k or more points, since k ≥ 2. This bound will prove the theorem when k is small (e.g. if k ≤ C for some absolute constant C). Thus, we need only consider the case when k is large, say k ≥ C. Suppose that there are m lines that each contain at least k points. These lines generate at least mk incidences, and so by the first formulation of the Szemerédi–Trotter theorem, we have $mk=O\left(n^{2/3}m^{2/3}+n+m\right),$ and so at least one of the statements $mk=O(n^{2/3}m^{2/3}),mk=O(n)$, or $mk=O(m)$ is true. The third possibility is ruled out since k was assumed to be large, so we are left with the first two. But in either of these two cases, some elementary algebra will give the bound $m=O(n^{2}/k^{3}+n/k)$ as desired.

Optimality

Except for its constant, the Szemerédi–Trotter incidence bound cannot be improved.
To see this, consider for any positive integer $N\in \mathbb {N} $ a set of points on the integer lattice $P=\left\{(a,b)\in \mathbb {Z} ^{2}\ :\ 1\leq a\leq N;1\leq b\leq 2N^{2}\right\},$ and a set of lines $L=\left\{(x,mx+b)\ :\ m,b\in \mathbb {Z} ;1\leq m\leq N;1\leq b\leq N^{2}\right\}.$ Clearly, $|P|=2N^{3}$ and $|L|=N^{3}$. Since each line is incident to N points (i.e., once for each $x\in \{1,\cdots ,N\}$), the number of incidences is $N^{4}$ which matches the upper bound.[7]

Generalization to $\mathbb {R} ^{d}$

One generalization of this result to arbitrary dimension, $\mathbb {R} ^{d}$, was found by Agarwal and Aronov.[8] Given a set of n points, S, and the set of m hyperplanes, H, which are each spanned by S, the number of incidences between S and H is bounded above by $O\left(m^{2/3}n^{d/3}+n^{d-1}\right).$ Equivalently, the number of hyperplanes in H containing k or more points is bounded above by $O\left({\frac {n^{d}}{k^{3}}}+{\frac {n^{d-1}}{k}}\right).$ A construction due to Edelsbrunner shows this bound to be asymptotically optimal.[9] József Solymosi and Terence Tao obtained near sharp upper bounds for the number of incidences between points and algebraic varieties in higher dimensions, when the points and varieties satisfy "certain pseudo-line type axioms". Their proof uses the Polynomial Ham Sandwich Theorem.[10]

In $\mathbb {C} ^{2}$

Many proofs of the Szemerédi–Trotter theorem over $\mathbb {R} $ rely in a crucial way on the topology of Euclidean space, so they do not extend easily to other fields; for example, the original proof of Szemerédi and Trotter, the polynomial partitioning proof, and the crossing number proof do not extend to the complex plane.
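The lattice construction in the optimality argument above can be checked by brute force for small $N$. This is an illustration added for concreteness (the function name is invented here, not taken from the article):

```python
from itertools import product

def lattice_incidences(N):
    # Lines y = m*x + b with 1 <= m <= N and 1 <= b <= N^2, against the
    # point set P = {(a, b) : 1 <= a <= N, 1 <= b <= 2N^2}.
    count = 0
    for m, b in product(range(1, N + 1), range(1, N**2 + 1)):
        for x in range(1, N + 1):
            # y = m*x + b always lands in [1, 2N^2], so every point of the
            # line with 1 <= x <= N belongs to P.
            if 1 <= m * x + b <= 2 * N**2:
                count += 1
    return count

for N in (2, 3, 4):
    print(N, lattice_incidences(N) == N**4)  # True for each N
```

Each of the $N^{3}$ lines contributes exactly $N$ incidences, so the count is exactly $N^{4}$, matching the order of the Szemerédi–Trotter bound for these parameters.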
Tóth successfully generalized the original proof of Szemerédi and Trotter to the complex plane $\mathbb {C} ^{2}$ by introducing additional ideas.[11] This result was also obtained independently and through a different method by Zahl.[12] The implicit constant in the bound is not the same in the complex numbers: in Tóth's proof the constant can be taken to be $10^{60}$; the constant is not explicit in Zahl's proof. When the point set is a Cartesian product, Solymosi and Tardos show that the Szemerédi-Trotter bound holds using a much simpler argument.[13]

In finite fields

Let $\mathbb {F} $ be a field. A Szemerédi-Trotter bound is impossible in general due to the following example, stated here in $\mathbb {F} _{p}$: let ${\mathcal {P}}=\mathbb {F} _{p}\times \mathbb {F} _{p}$ be the set of all $p^{2}$ points and let ${\mathcal {L}}$ be the set of all $p^{2}$ lines in the plane. Since each line contains $p$ points, there are $p^{3}$ incidences. On the other hand, a Szemerédi-Trotter bound would give $O((p^{2})^{2/3}(p^{2})^{2/3}+p^{2})=O(p^{8/3})$ incidences. This example shows that the trivial, combinatorial incidence bound is tight. Bourgain, Katz and Tao[14] show that if this example is excluded, then an incidence bound that is an improvement on the trivial bound can be attained. Incidence bounds over finite fields are of two types: (i) when at least one of the set of points or lines is 'large' in terms of the characteristic of the field; (ii) both the set of points and the set of lines are 'small' in terms of the characteristic.

Large set incidence bounds

Let $q$ be an odd prime power. Then Vinh[15] showed that the number of incidences between $n$ points and $m$ lines in $\mathbb {F} _{q}^{2}$ is at most ${\frac {nm}{q}}+{\sqrt {qnm}}.$ Note that there is no implicit constant in this bound.

Small set incidence bounds

Let $\mathbb {F} $ be a field of characteristic $p\neq 2$.
Stevens and de Zeeuw[16] show that the number of incidences between $n$ points and $m$ lines in $\mathbb {F} ^{2}$ is $O\left(m^{11/15}n^{11/15}\right)$ under the condition $m^{-2}n^{13}\leq p^{15}$ in positive characteristic. (In a field of characteristic zero, this condition is not necessary.) This bound is better than the trivial incidence estimate when $m^{7/8}<n<m^{8/7}$. If the point set is a Cartesian product, then they show an improved incidence bound: let ${\mathcal {P}}=A\times B\subseteq \mathbb {F} ^{2}$ be a finite set of points with $|A|\leq |B|$ and let ${\mathcal {L}}$ be a set of lines in the plane. Suppose that $|A||B|^{2}\leq |{\mathcal {L}}|^{3}$ and in positive characteristic that $|A||{\mathcal {L}}|\leq p^{2}$. Then the number of incidences between ${\mathcal {P}}$ and ${\mathcal {L}}$ is $O\left(|A|^{3/4}|B|^{1/2}|{\mathcal {L}}|^{3/4}+|{\mathcal {L}}|\right).$ This bound is optimal. Note that by point-line duality in the plane, this incidence bound can be rephrased for an arbitrary point set and a set of lines having a Cartesian product structure. In both the reals and arbitrary fields, Rudnev and Shkredov[17] show an incidence bound for when both the point set and the line set have a Cartesian product structure. This is sometimes better than the above bounds.

References

1. Pach, János; Radoičić, Radoš; Tardos, Gábor; Tóth, Géza (2006). "Improving the Crossing Lemma by Finding More Crossings in Sparse Graphs". Discrete & Computational Geometry. 36 (4): 527–552. doi:10.1007/s00454-006-1264-9. 2. Ackerman, Eyal (December 2019). "On topological graphs with at most four crossings per edge". Computational Geometry. 85: 101574. arXiv:1509.01932. doi:10.1016/j.comgeo.2019.101574. ISSN 0925-7721. S2CID 16847443. 3. Pach, János; Tóth, Géza (1997). "Graphs drawn with few crossings per edge". Combinatorica. 17 (3): 427–439. CiteSeerX 10.1.1.47.4690. doi:10.1007/BF01215922. S2CID 20480170. 4. Szemerédi, Endre; Trotter, William T. (1983).
"Extremal problems in discrete geometry". Combinatorica. 3 (3–4): 381–392. doi:10.1007/BF02579194. MR 0729791. S2CID 1750834. 5. Szemerédi, Endre; Trotter, William T. (1983). "A Combinatorial Distinction Between the Euclidean and Projective Planes" (PDF). European Journal of Combinatorics. 4 (4): 385–394. doi:10.1016/S0195-6698(83)80036-5. 6. Székely, László A. (1997). "Crossing numbers and hard Erdős problems in discrete geometry". Combinatorics, Probability and Computing. 6 (3): 353–358. CiteSeerX 10.1.1.125.1484. doi:10.1017/S0963548397002976. MR 1464571. S2CID 36602807. 7. Terence Tao (March 17, 2011). "An incidence theorem in higher dimensions". Retrieved August 26, 2012. 8. Agarwal, Pankaj; Aronov, Boris (1992). "Counting facets and incidences". Discrete & Computational Geometry. 7 (1): 359–369. doi:10.1007/BF02187848. 9. Edelsbrunner, Herbert (1987). "6.5 Lower bounds for many cells". Algorithms in Combinatorial Geometry. Springer-Verlag. ISBN 978-3-540-13722-1. 10. Solymosi, József; Tao, Terence (September 2012). "An incidence theorem in higher dimensions". Discrete & Computational Geometry. 48 (2): 255–280. arXiv:1103.2926. doi:10.1007/s00454-012-9420-x. MR 2946447. S2CID 17830766. 11. Tóth, Csaba D. (2015). "The Szemerédi-Trotter Theorem in the Complex Plane". Combinatorica. 35 (1): 95–126. arXiv:math/0305283. doi:10.1007/s00493-014-2686-2. S2CID 13237229. 12. Zahl, Joshua (2015). "A Szemerédi-Trotter Type Theorem in ℝ4". Discrete & Computational Geometry. 54 (3): 513–572. arXiv:1203.4600. doi:10.1007/s00454-015-9717-7. S2CID 16610999. 13. Solymosi, Jozsef; Tardos, Gabor (2007). "On the number of k-rich transformations". Proceedings of the twenty-third annual symposium on Computational geometry - SCG '07. SCG '07. New York, New York, USA: ACM Press. pp. 227–231. doi:10.1145/1247069.1247111. ISBN 978-1-59593-705-6. S2CID 15928844. 14. Bourgain, Jean; Katz, Nets; Tao, Terence (2004-02-01). "A sum-product estimate in finite fields, and applications". 
Geometric and Functional Analysis. 14 (1): 27–57. arXiv:math/0301343. doi:10.1007/s00039-004-0451-1. ISSN 1016-443X. S2CID 14097626. 15. Vinh, Le Anh (November 2011). "The Szemerédi–Trotter type theorem and the sum-product estimate in finite fields". European Journal of Combinatorics. 32 (8): 1177–1181. arXiv:0711.4427. doi:10.1016/j.ejc.2011.06.008. ISSN 0195-6698. S2CID 1956316. 16. Stevens, Sophie; de Zeeuw, Frank (2017-08-03). "An improved point-line incidence bound over arbitrary fields". Bulletin of the London Mathematical Society. 49 (5): 842–858. arXiv:1609.06284. doi:10.1112/blms.12077. ISSN 0024-6093. S2CID 119635655. 17. Rudnev, Misha; Shkredov, Ilya D. (July 2022). "On the growth rate in SL_2(F_p), the affine group and sum-product type implications". Mathematika. 68 (3): 738–783. arXiv:1812.01671. doi:10.1112/mtk.12120. S2CID 248710290.
Lock in feedback in sequential experiments
Maurits Kaptein, Davide Iannuzzi
Department of Methodology and Statistics
Research output: Working paper › Other research output

We often encounter situations in which an experimenter wants to find, by sequential experimentation, $x_{max} = \arg\max_{x} f(x)$, where $f(x)$ is a (possibly unknown) function of a well controllable variable $x$. Taking inspiration from physics and engineering, we have designed a new method to address this problem. In this paper, we first introduce the method in continuous time, and then present two algorithms for use in sequential experiments. Through a series of simulation studies, we show that the method is effective for finding maxima of unknown functions by experimentation, even when the maximum of the functions drifts or when the signal to noise ratio is low.

Keywords: cs.LG
Note: 20 Pages, 7 Figures
Kaptein, M., & Iannuzzi, D. (2015). Lock in feedback in sequential experiments. (arXiv). arXiv.org.
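The idea sketched in the abstract (perturb the controllable variable with a known oscillation, demodulate the noisy response against the same reference, and step toward the maximum) can be illustrated in a few lines. Everything below, including the function name, the parameter values, and the leaky-integrator update, is an illustrative assumption and not the authors' continuous-time method or either of their two algorithms:

```python
import math
import random

def lock_in_feedback(f, x0, amplitude=0.5, gain=0.2, steps=4000, noise=0.05, seed=0):
    # Oscillate around the current estimate, multiply the noisy observation
    # by the reference signal, and leak-integrate: the integrator output is
    # roughly proportional to f'(x), so stepping along it climbs the function.
    rng = random.Random(seed)
    x, integ = x0, 0.0
    for t in range(steps):
        ref = math.cos(t)                      # known reference oscillation
        y = f(x + amplitude * ref) + rng.gauss(0.0, noise)
        integ = 0.99 * integ + 0.01 * y * ref  # lock-in style demodulation
        x += gain * integ
    return x

# Noisy quadratic with its maximum at x = 2.
estimate = lock_in_feedback(lambda x: -(x - 2.0) ** 2, x0=0.0)
```

With these illustrative settings the estimate lands near the true maximizer $x=2$; the paper's treatment is more careful than this toy loop, in particular about drifting maxima and low signal to noise ratios.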
Abstract: Let $k\ge 2$ be an integer and $T_1,\ldots, T_k$ be spanning trees of a graph $G$. If, for every pair of vertices $(u,v)$ of $V(G)$, the paths from $u$ to $v$ in the trees $T_i$, $1\le i\le k$, pairwise share no edges and no vertices other than $u$ and $v$, then $T_1,\ldots, T_k$ are completely independent spanning trees in $G$. For $2k$-regular graphs which are $2k$-connected, such as the Cartesian product of a complete graph of order $2k-1$ and a cycle, and some Cartesian products of three cycles (for $k=3$), the maximum number of completely independent spanning trees contained in these graphs is determined, and it turns out that this maximum is not always $k$. Keywords: completely independent spanning tree; spanning tree; Cartesian product.
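The defining condition can be checked mechanically on small instances. The brute-force verifier below is a sketch written for this abstract (the helper names are invented); the two trees in the example can be checked by hand to be completely independent spanning trees of $K_4$:

```python
from collections import deque
from itertools import combinations

def tree_path(edges, u, v):
    # BFS path between u and v inside a tree given by its edge list.
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev = {u: None}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            break
        for y in adj[x]:
            if y not in prev:
                prev[y] = x
                queue.append(y)
    path = [v]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def completely_independent(trees, vertices):
    # For every vertex pair, the u-v paths in the trees must be pairwise
    # edge-disjoint and share no vertices other than u and v themselves.
    for u, v in combinations(vertices, 2):
        paths = [tree_path(t, u, v) for t in trees]
        for p, q in combinations(paths, 2):
            if set(p[1:-1]) & set(q[1:-1]):
                return False
            ep = {frozenset(e) for e in zip(p, p[1:])}
            eq = {frozenset(e) for e in zip(q, q[1:])}
            if ep & eq:
                return False
    return True

# Two completely independent spanning trees of K4 (vertices 0..3).
T1 = [(0, 1), (1, 2), (2, 3)]
T2 = [(1, 3), (3, 0), (0, 2)]
print(completely_independent([T1, T2], range(4)))  # True
```

Replacing $T_2$ by a tree sharing an edge with $T_1$ (for example the star centered at vertex 0) makes the check fail, as expected from the definition.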
\begin{document} \title[Poincar\'e Theory for $\mathbb{A}/\mathbb{Q}$]{Poincar\'e Theory for the Ad\`ele Class Group $\mathbb{A}/\mathbb{Q}$ and Compact Abelian one dimensional Solenoidal Groups} \author[M. Cruz -- L\'opez and A. Verjovsky]{Manuel Cruz -- L\'opez$^*$ and Alberto Verjovsky$^{**}$} \address{$*$ Departamento de Matem\'aticas, Universidad de Guanajuato, Jalisco s/n, Mineral de Valenciana, Guanajuato, Gto. 36240 M\'exico.} \email{[email protected]} \address{$**$ Instituto de Matem\'aticas, Unidad Cuernavaca, Universidad Nacional Aut\'onoma de M\'exico, Apdo. Postal 2 C.P. 2000, Cuernavaca, Mor. M\'exico} \email{[email protected]} \subjclass[2000]{Primary: 22XX, 37XX, Secondary: 22Cxx, 37Bxx} \keywords{compact abelian group, solenoidal group, Poincar\'e rotation} \begin{abstract} This article presents a generalization of the notion of \emph{Poincar\'e rotation set} to homeomorphisms of the ad\`ele class group $\mathbb{A}/\mathbb{Q}$ of the rational numbers $\mathbb{Q}$, which is a connected compact abelian group which can be identified with the one-dimensional universal solenoid $\mathsf{S}$, \emph{i.e.\,} the algebraic universal covering of the circle. The definition is first introduced in general for homeomorphisms of $\mathsf{S}$ which are isotopic to a translation, and then specialized to homeomorphisms of $\mathsf{S}$ isotopic to the identity, in which case the rotation set is a closed interval contained in the base leaf (the connected component of the identity). If in the latter case the rotation interval reduces to a single element $\rho$ and $\rho$ is irrational (\emph{i.e.\,} it is a monothetic generator of $\mathsf{S}$), we show that the homeomorphism is semiconjugate to the translation $z\mapsto\rho{z}$, as in the classical theory of Poincar\'e.
This theory is valid for any general compact abelian one dimensional solenoidal group $\mathsf{S}_G$; these groups are Pontryagin duals of dense subgroups $G$ of the rational numbers with the discrete topology. These solenoidal groups are one-dimensional laminations which are locally homeomorphic to the product of a Cantor set by an interval, so they behave very much like a ``diffuse'' version of the circle. Our approach differs from others because we use Pontryagin duality of compact abelian groups to define the rotation sets. \end{abstract} \maketitle \section[Introduction]{Introduction} \label{introduction} In 1885, H. Poincar\'e (see \cite{Poi}) introduced an invariant of topological conjugation for homeomorphisms of the unit circle which are isotopic to the identity: \[ \rho:\mathrm{Homeo}_+(\mathbb{S}^1)\longrightarrow \mathbb{S}^1, \quad f\longmapsto \rho(f), \] called the \textsf{rotation number} of $f$. He then proved a remarkable topological classification theorem for the dynamics of any orientation preserving homeomorphism $f\in \mathrm{Homeo}_+(\mathbb{S}^1)$: $f$ has a periodic orbit if and only if $\rho(f)=e^{2\pi{i}\frac{p}{q}}, p,q\in\mathbb{Z}$ (\emph{i.e.\,} the rotation number is rational). If the rotation number is of the form $\rho(f)=e^{2\pi{i}\alpha}$ with $\alpha$ irrational then $f$ is semiconjugate to the irrational rotation $R_{\rho(f)}\,$, $z\mapsto{\rho(f)z}$. The semiconjugacy is actually a conjugacy if the orbits of $f$ are dense. This work was continued by A. Denjoy in 1932 (see \cite{Den}) who, among other things, showed that if $f$ is a diffeomorphism with irrational rotation number and its derivative has bounded variation then $f$ is conjugated to the rotation $R_{\rho(f)}$. In 1965, V.I. Arnold (see \cite{Arn1}) solved the conjugation problem when the diffeomorphism is real analytic and close to a rotation, by introducing a Diophantine condition. An important issue is the existence of a differentiable conjugacy, where M.
Herman made so many important contributions (see \cite{Her}). Further developments of this theory have been one of the most fruitful subjects in dynamical systems, as shown by the works of A.N. Kolmogorov, V.I. Arnold, J. Moser, M.R. Herman, A.D. Brjuno, J.C. Yoccoz, among others (see \cite{Kol},\cite{Arn2},\cite{Mos},\cite{Brj1,Brj2}, \cite{Yoc}; see also \cite{Ghys}, \cite{Her}, \cite{Nav}). In general, for compact connected abelian groups there is a set called the \emph{rotation set} attached to a homeomorphism (see for instance, \cite{MZ} and \cite{Kwa}). For the case studied here of homeomorphisms isotopic to the identity of one dimensional compact solenoidal groups, this rotation set is an interval. This article generalizes the Poincar\'e theory to any compact abelian one dimensional solenoidal group $\mathsf{S}_G$, in the case when the \emph{rotation set consists of one point and this element is a monothetic generator of $\mathsf{S}_G$}. The theory is first developed for the general case of homeomorphisms of $\mathsf{S}_G$ which are isotopic to a translation by an element not in the base leaf. Translations by the zero element of the group lead to homeomorphisms isotopic to the identity, which is precisely reminiscent of the classical theory. In this context, the semiconjugation theorem is proved. Compact abelian one dimensional solenoidal groups $\mathsf{S}_G$ are obtained as continuous homomorphic images of the algebraic universal covering space of the circle \[ \displaystyle{\mathsf{S} := \varprojlim_{n\in \mathbb{N}} \mathbb R/n\mathbb{Z}} \] or, by Pontryagin duality, as compact abelian topological groups whose groups of characters are additive subgroups of the rational numbers with the discrete topology. In the case of the one dimensional universal solenoidal group $\mathsf{S}$, the group of characters is the whole discrete group $\mathbb{Q}$.
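For the reader's convenience, the duality just mentioned can be made concrete. The following display records standard Pontryagin duality facts and is added here as background, not quoted from the paper:

```latex
% Background remark: Char(\mathsf{S}) \cong \mathbb{Q}, and the character
% attached to q \in \mathbb{Q} restricts on the dense one parameter subgroup
% \mathcal{L}_0 \cong \mathbb{R} (the base leaf) to
\[
\chi_q(t) = \exp(2\pi i q t), \qquad t\in \mathcal{L}_0\cong \mathbb{R},
\]
% and, since \mathcal{L}_0 is dense in \mathsf{S}, a character of \mathsf{S}
% is determined by this restriction.
```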
The group $\mathsf{S}$ can be thought of as a generalized circle and it is isomorphic as a topological group to the \textsf{ad\`ele class group} of the rational numbers $\mathbb{A}/\mathbb{Q}$, which is the orbit space of the locally trivial $\mathbb{Q}$ -- bundle structure $\mathbb{Q}\hookrightarrow \mathbb{A} \longrightarrow \mathbb{A}/\mathbb{Q}$, where $\mathbb{A}$ is the ad\`ele group of the rational numbers and $\mathbb{Q}\hookrightarrow \mathbb{A}$ is a discrete cocompact subgroup of $\mathbb{A}$. The ad\`ele class group is a fundamental arithmetic object in mathematics whose multiplicative part was invented by C. Chevalley for the purposes of simplifying and clarifying class field theory. This compact abelian group plays an essential role in the thesis of J. Tate (see \cite{Tat}) which laid the foundations for the Langlands program. By definition, $\mathsf{S}$ is a compact abelian topological group with a locally trivial $\widehat{\Z}$ -- bundle structure $\widehat{\Z}\hookrightarrow \mathsf{S}\longrightarrow \mathbb{S}^1$ and also a one dimensional foliated space whose leaves have a canonical affine structure isomorphic to the real one dimensional affine space $\mathbf{A}^1$. Here, $\displaystyle{\widehat{\Z} := \varprojlim_{n\in \mathbb{N}} \mathbb{Z}/n\mathbb{Z}}$ is the profinite completion of the integers, and it is an abelian Cantor group. Thus, topologically, $\mathsf{S}$ is a compact and connected locally trivial fibration over the circle with fiber the Cantor set. More general objects are the so called solenoidal manifolds, which were introduced by Dennis Sullivan (see \cite{Sul} and \cite{Ver}). These solenoidal manifolds are Polish spaces with the property that each point has a neighborhood which is homeomorphic to an open interval times a Cantor set. He shows that any compact one dimensional \emph{orientable} solenoidal manifold is the suspension of a homeomorphism of the Cantor set. 
Examples of one dimensional solenoidal manifolds are one dimensional tiling spaces and one dimensional quasicrystals like the ones studied by R.F. Williams and L. Sadun, and also by J. Aliste -- Prieto (see \cite{WS} and \cite{Ali}). \begin{remark} In principle, Poincar\'e theory might be described for general compact, orientable one dimensional solenoidal manifolds. What makes the difference in our case is the fact that we can apply to these groups the Pontryagin duality and the classical theory of harmonic analysis for compact and locally compact Abelian groups. \end{remark} In the development of the article, the theory is first described for the ad\`ele class group of the rational numbers $\mathsf{S}$, since this is the paradigmatic example and all the ideas are already present there. This is done by using the notion of asymptotic cycles of S. Schwartzman (see \cite{Sch}). So the analysis is first done for the case of homeomorphisms of solenoids which are isotopic to the identity. After that, the more general case of homeomorphisms isotopic to translations, with the translation element not in the base leaf, is treated. We now briefly describe the definition of the rotation set and state the main theorem (see Sections \ref{rotation_set} and \ref{Poincare_theory}). Suppose that $f:\mathsf{S}\longrightarrow \mathsf{S}$ is any homeomorphism isotopic to the identity which can be written as $f=\mathrm{id} + \varphi$, where $\varphi:\mathsf{S}\longrightarrow \mathsf{S}$ is the displacement function along the one dimensional leaves of $\mathsf{S}$ with respect to the affine structure. The suspension of $f$ is the compact space \[ \Sigma_f(\Ss) := \mathsf{S}\times [0,1] /(z,1)\sim (f(z),0). \] Since $f$ is isotopic to the identity, it follows that $\Sigma_f(\Ss)$ is homeomorphic to the product space $\mathsf{S}\times \mathbb{S}^1$ which is a compact abelian topological group.
The space $\Sigma_f(\Ss)$ has a natural compact abelian group structure, described in detail in Section \ref{rotation_translation}. For the sake of simplicity, identify the product group structure of $\mathsf{S} \times \mathbb{S}^1$ with the group structure on $\Sigma_f(\Ss)$. So the character group of the suspension of $f$ is $$ \mathrm{Char}(\Sigma_f(\Ss))\cong \mathrm{Char}(\mathsf{S}\times \mathbb{S}^1)\cong \mathrm{Char}(\mathsf{S})\times \mathrm{Char}(\mathbb{S}^1)\cong \mathbb{Q}\times \mathbb{Z}. $$ The associated suspension flow $\phi_t:\Sigma_f(\Ss)\longrightarrow\Sigma_f(\Ss)$ is given by: \[ \phi_t(z,s) = (f^m(z),t+s-m),\qquad (m\leq t+s < m+1). \] Now, for any given character $\chi_{q,n}\in \mathrm{Char}(\Sigma_f(\Ss))$, there exists a unique 1 -- cocycle \[ C_{\chi_{q,n}}:\mathbb R\times \Sigma_f(\Ss)\longrightarrow \mathbb R \] associated to $\chi_{q,n}$ (see Section \ref{rotation_set} for complete information) such that \[ \chi_{q,n}(\phi_t(z,s)) = \exp(2\pi iC_{\chi_{q,n}}(t,(z,s)))\cdot \chi_{q,n}(z,s), \] for every $(z,s)\in \Sigma_f(\Ss)$ and $t\in \mathbb R$. Using the definition of $\chi_{q,n}$ and $\phi_t$ and comparing terms in the last equality, one obtains an explicit expression for the 1 -- cocycle $C_{\chi_{q,n}}(t,(z,s))$. By Birkhoff's ergodic theorem, there is a well defined homomorphism \[ H_f:\mathrm{Char}(\Sigma_f(\Ss))\longrightarrow \mathbb R \] given by \[ H_f(\chi_{q,n}) = \int_{\Sigma_f(\Ss)} C_{\chi_{q,n}}(1,(z,s)) d\nu, \] where $\nu$ is a $\phi_t$--invariant Borel probability measure on $\Sigma_f(\Ss)$. The explicit calculation of the 1 -- cocycle implies that the last integral only depends on an $f$ -- invariant Borel probability measure $\mu$.
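In the classical case, where the fiber is trivial and $\mathsf{S}$ is replaced by $\mathbb{S}^1$, this integral reduces to the Birkhoff average of the displacement along an orbit, i.e. to the classical Poincar\'e rotation number. The following minimal numerical sketch illustrates this; the lift $F(x)=x+\alpha+\varepsilon\sin(2\pi x)$ is an illustrative choice and not an example taken from the text.

```python
import math

# Classical sanity check (circle case): the rotation number is the
# Birkhoff average of the displacement along an orbit of a lift.
# F below lifts a circle diffeomorphism whenever |2*pi*eps| < 1.
def rotation_number(alpha, eps, n_iter=100_000, x0=0.0):
    F = lambda x: x + alpha + eps * math.sin(2 * math.pi * x)
    x = x0
    for _ in range(n_iter):
        x = F(x)
    return (x - x0) / n_iter  # average displacement per iterate

# For eps = 0 the map is the rigid rotation R_alpha, so the average is alpha.
print(rotation_number(0.3, 0.0))   # 0.3 up to rounding
# A small perturbation keeps the average inside [alpha - eps, alpha + eps].
print(rotation_number(0.3, 0.05))
```

For $\varepsilon=0$ this recovers $\rho_\mu(R_\alpha)=\alpha$, the computation carried out for rotations in Section \ref{rotation_set}.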
So, if $\mathcal{P}_f(\mathsf{S})$ is the weak$^*$ compact convex space consisting of all such measures on $\mathsf{S}$, the well defined continuous homomorphism \[ \rho_\mu(f):\mathrm{Char}(\Sigma_f(\Ss))\longrightarrow \mathbb{S}^1 \] given by \[ \rho_\mu(f)(\chi_{q,n}) := \exp(2\pi iH_f(\chi_{q,n})) \] determines an element in $\mathrm{Char}(\mathrm{Char}(\Sigma_f(\Ss)))\cong \mathsf{S}\times \mathbb{S}^1$ which does not depend on the second component. By Pontryagin's duality theorem, it defines an element $\rho_\mu(f)\in \mathsf{S}$ called the rotation element associated to $f$, which is the generalized Poincar\'e rotation number. As expected, $\rho_\mu(f)$ is an element in the solenoid itself and it measures, in some sense, the average displacement of points under iteration of $f$ along the one dimensional leaves with the Euclidean metric. If $\rho:\mathcal{P}_f(\mathsf{S})\longrightarrow \mathsf{S}$ is the map given by $\mu\longmapsto \rho_\mu(f)$, then $\rho$ is continuous from $\mathcal{P}_f(\mathsf{S})$ to $\mathsf{S}$. Since $f$ is isotopic to the identity, the image $\rho(\mathcal{P}_f(\mathsf{S}))$ is a compact subset of $\mathsf{S}$. The path components of $\mathsf{S}$ are the one dimensional leaves, so the set $\rho(\mathcal{P}_f(\mathsf{S}))$ is a compact interval $I_f$, which, up to a translation, is contained in the one parameter subgroup $\mathcal{L}_0$. Now, $\mathcal{L}_0$ is canonically isomorphic to $\mathbb R$, so it is possible to identify $I_f$ with an interval in the real line. In particular, if $f$ is uniquely ergodic, then the interval $I_f$ reduces to a point and the rotation element is a unique element of $\mathsf{S}$. \\ \noindent \textbf{Definition \ref{pseudoirrational_rotation}} The homeomorphism $f$ is a \textsf{pseudoirrational rotation} if $I_f$ consists of a single point $I_f=\{\alpha\}$. In this case, and only in this case, we call $\alpha$ the \emph{rotation element} of $f$. \\ In order to state the main result, some definitions are required.
\\ First observe that, since $\mathsf{S}$ is torsion free, there is no notion of ``rational'' element, so it is only required to give a suitable definition of what ``irrational'' means. This goes as follows: \\ \noindent \textbf{Definition \ref{irrational_element}} An element $\alpha\in \mathsf{S}$ is \textsf{irrational} if $\{n\alpha:n\in \mathbb{Z}\}$ is dense in $\mathsf{S}$. In classical terminology, $\mathsf{S}$ is said to be \textsf{monothetic} with generator $\alpha$. \\ Fixing a measure $\mu\in \mathcal{P}_f(\mathsf{S})$ determines a rotation element $\rho_\mu(f)$ of $f$, simply denoted by $\rho(f)$. If $F$ is any lift of $f$ of the form \[ F(t,k)=\left( F_k(t),k \right)\qquad (t,k)\in \mathbb R\times \widehat{\Z}, \] where $F_k:\mathbb R\longrightarrow \mathbb R$ is a homeomorphism which depends continuously on $k\in \widehat{\Z}$, the following definition seems appropriate (see Section \ref{Poincare_theory} for details). \\ \noindent \textbf{Definition \ref{bounded_meanvariation}} The homeomorphism $f$ has \textsf{bounded mean variation} if there exists $C>0$ such that the sequence $\{F_k^n(t) - t - n\tau(F)\}_{n\geq 1}$ is uniformly bounded by $C$, i.e. $C$ is independent of $(t,k)$. Here, $F_k^n$ is any lift of $f^n$ and $\tau(F)$ is a lifting of $\rho(f)$ to $\mathbb R\times \widehat{\Z}$ contained in the same leaf as $(t,k)$. \\ The generalized semiconjugacy theorem can be stated as follows: \\ \noindent \textbf{Theorem \ref{Poincare_theorem}} If $f\in \mathrm{Homeo}_+(\mathsf{S})$ is a pseudoirrational rotation with irrational rotation element $\rho(f)$, then $f$ is semiconjugated to the irrational rotation $R_{\rho(f)}$ if and only if $f$ has bounded mean variation. \\ The conjugacy question remains: \\ \noindent \textbf{Question \ref{question_conjugacy}} Under the same hypothesis of this theorem, is $f$ conjugated to the rotation $R_{\rho(f)}$ when $f$ is minimal?
\\ The answer to this question, together with further dynamical results, is the subject of recent investigation (see \cite{CV}). \\ For a general one dimensional compact abelian solenoidal group $\mathsf{S}_G$, the theorem can be stated as:\\ \noindent \textbf{Theorem \ref{solenoidal_Poincare-theorem}} Suppose that $f:\mathsf{S}_G\longrightarrow \mathsf{S}_G$ is any homeomorphism isotopic to the identity, or isotopic to a rotation by an element not in the base leaf, with irrational rotation element $\rho(f)$. Then $f$ is semiconjugated to the irrational rotation $R_{\rho(f)}$ if and only if $f$ has bounded mean variation. \\ Closely related to this work is the article by J. Kwapisz (see \cite{Kwa}), who gives a definition of a rotation element for homeomorphisms of the real line with almost periodic displacement. When the displacement is limit periodic, the corresponding convex hull is a compact abelian one dimensional solenoidal group. However, we consider $\mathsf{S}$ as a ``generalized circle'' (i.e. a compact abelian one dimensional pro -- Lie group) and we develop the theory from this perspective. Other similar studies of the Poincar\'e theory have been developed very recently by several authors. In the article \cite{Jag}, T. J\"ager proved that a minimal homeomorphism of the $d$ -- dimensional torus is semiconjugated to an irrational rotation if and only if it is a pseudoirrational rotation with bounded mean motion (see also \cite{AJ} and \cite{BCJL}). The article is organized as follows. Section \ref{universal_solenoid} defines the algebraic universal covering space of the circle, its character group, the suspension of a homeomorphism isotopic to the identity and its corresponding character group. Section \ref{rotation_set} introduces the notion of 1--cocycle and gives the definition of the generalized rotation set and element.
In order to define this generalized rotation element $\rho(f)$ it is necessary to use the following ingredients: Pontryagin's duality theory for compact Abelian groups, the Bruschlinsky--Eilenberg homology theory and Schwartzman theory of asymptotic cycles, as well as the notion of a 1--cocycle and ergodic theory. The semiconjugacy theorem is proved in Section \ref{Poincare_theory}. Finally, Section \ref{rotation_translation} introduces a general definition for the case of homeomorphisms isotopic to translations whose rotation element is not in the base leaf and for homeomorphisms of general compact abelian one dimensional solenoidal groups. \section[The algebraic universal covering space of the circle]{The algebraic universal covering space of the circle} \label{universal_solenoid} This Section introduces the algebraic universal covering space of the circle, its character group, the suspension of a homeomorphism isotopic to the identity and its corresponding character group. \subsection[The universal one dimensional solenoid]{The universal one dimensional solenoid} By covering space theory, for any integer $n\geq 1$ the unbranched covering space of degree $n$, $p_n:\mathbb{S}^1 \longrightarrow \mathbb{S}^1$ given by $z\longmapsto {z^n}$, is well defined. If $n,m\in \mathbb{Z}^+$ and $n$ divides $m$, then there exists a unique covering map $p_{nm}:\mathbb{S}^1\longrightarrow \mathbb{S}^1$ such that $p_n \circ p_{nm} = p_m$. This determines a projective system of covering spaces $\{\mathbb{S}^1,p_n\}_{n\geq 1}$ whose projective limit is the \textsf{universal one dimensional solenoid} \[ \mathsf{S} := \varprojlim_{n\in \mathbb{N}} \{\mathbb{S}^1,p_n\} \] with canonical projection $\mathsf{S}\longrightarrow \mathbb{S}^1$ determined by projection onto the first coordinate, with a locally trivial $\widehat{\Z}$--bundle structure $\widehat{\Z}\hookrightarrow \mathsf{S} \longrightarrow\mathbb{S}^1$.
Here, $\displaystyle{\widehat{\Z} := \varprojlim_{n\in \mathbb{N}} \mathbb{Z}/n\mathbb{Z}}$ is the profinite completion of $\mathbb{Z}$, which is a compact, perfect and totally disconnected Abelian topological group homeomorphic to the Cantor set. Since $\widehat{\Z}$ is the profinite completion of $\mathbb{Z}$, it admits a canonical inclusion of $\mathbb{Z}$ whose image is dense. $\mathsf{S}$ can also be realized as the orbit space of the $\mathbb{Q}$--bundle structure $\mathbb{Q} \hookrightarrow \mathbb{A} \longrightarrow \mathbb{A}/\mathbb{Q}$, where $\mathbb{A}$ is the ad\`ele group of the rational numbers, which is a locally compact Abelian group, $\mathbb{Q}$ is a discrete subgroup of $\mathbb{A}$ and $\mathbb{A}/\mathbb{Q} \cong \mathsf{S}$ is a compact Abelian group (see \cite{RV}). From this perspective, $\mathbb{A}/\mathbb{Q}$ can be seen as a projective limit whose $n$--th component corresponds to the unique covering space of degree $n\geq 1$ of $\mathbb{S}^1$. $\mathsf{S}$ is also called the \textsf{algebraic universal covering space} of the circle $\mathbb{S}^1$. The Galois group of the covering is $\widehat{\Z}$, the \textsf{algebraic fundamental group} of $\mathbb{S}^1$. By considering the properly discontinuous free action of $\mathbb{Z}$ on $\mathbb R\times \widehat{\Z}$ given by \[ \gamma\cdot (t,k) := (t+\gamma,k-\gamma) \quad (\gamma\in \mathbb{Z}), \] $\mathsf{S}$ is identified with the orbit space $\mathbb R\times_{\mathbb{Z}} \widehat{\Z}$. Here, $\mathbb{Z}$ is acting on $\mathbb R$ by covering transformations and on $\widehat{\Z}$ by translations. The path connected component of the identity element $0\in \mathsf{S}$ is called the \textsf{base leaf} and will be denoted by $\mathcal{L}_0$. Clearly, $\mathcal{L}_0$ is the image of $\mathbb R\times \{0\}$ under the canonical projection $\mathbb R\times \widehat{\Z}\longrightarrow \mathsf{S}$ and it is homeomorphic to $\mathbb R$.
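The compatibility condition defining the projective limit can be probed concretely at finitely many levels. The following numerical sketch is only an illustration under stated assumptions: truncating at the divisors of $12$ is an arbitrary choice, and \texttt{leaf\_point} is a hypothetical helper, not notation from the text. It verifies that a real number $t$ on the base leaf determines coordinates $w_n = e^{2\pi it/n}$ with $p_{nm}(w_m)=w_m^{m/n}=w_n$ whenever $n$ divides $m$.

```python
import cmath
import math

# Finite-stage sketch of the projective limit S = lim {S^1, p_n},
# truncated (arbitrarily) at the levels n dividing 12.
LEVELS = [1, 2, 3, 4, 6, 12]

def leaf_point(t):
    # Coordinates of the base-leaf point determined by t in R.
    return {n: cmath.exp(2j * math.pi * t / n) for n in LEVELS}

def is_compatible(w, tol=1e-9):
    # Bonding maps: p_{nm}(z) = z^(m/n) must send w_m to w_n when n | m.
    return all(abs(w[m] ** (m // n) - w[n]) < tol
               for n in LEVELS for m in LEVELS if m % n == 0)

w = leaf_point(0.7)
assert is_compatible(w)
# The deck transformation t -> t + 12 fixes every truncated coordinate,
# so distinct reals can agree at all finite stages shown here.
assert is_compatible(leaf_point(0.7 + 12))
```

At full depth (all $n\in\mathbb{N}$) distinct reals are separated, which is why the base leaf embeds injectively in $\mathsf{S}$; the coincidence above is an artifact of the finite truncation.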
In summary, $\mathsf{S}$ is a compact, connected, Abelian topological group and also a one dimensional lamination where each ``leaf'' is a simply connected one dimensional manifold, homeomorphic to the universal covering space $\mathbb R$ of $\mathbb{S}^1$, and a typical ``transversal'' is isomorphic to the Cantor group $\widehat{\Z}$. $\mathsf{S}$ also has a leafwise $\mathrm{C}^\infty$ Riemannian metric (i.e., $\mathrm{C}^\infty$ along the leaves) which renders each leaf isometric to the real line with its standard metric. So, it makes sense to speak of a rigid translation along the leaves. The leaves also have a natural order equivalent to the order of the real line. \subsubsection*[Characters of $\mathsf{S}$]{Characters of $\mathsf{S}$} \label{charactersS} Denote by $\mathrm{Char}(\mathsf{S}):=\mathrm{Hom}_{\text{cont}}(\mathsf{S},\mathbb{S}^1)$ the topological group, endowed with the uniform topology, consisting of all continuous homomorphisms from $\mathsf{S}$ into the multiplicative group $\mathbb{S}^1$. This group is called the \textsf{Pontryagin dual} of $\mathsf{S}$ or the \textsf{character group} of $\mathsf{S}$. From what has been said before, $\mathsf{S}\cong \mathbb{A}/\mathbb{Q}$ and, since $\mathbb{A}$ is selfdual (i.e., $\mathbb{A}\cong \mathrm{Char}(\mathbb{A})$), it follows that $\mathrm{Char}(\mathsf{S})\cong \mathbb{Q}$. If $\check{H}^1(\mathsf{S},\mathbb{Z})$ denotes the first \v{C}ech cohomology group of $\mathsf{S}$ with coefficients in $\mathbb{Z}$, since $\mathsf{S}$ is a compact connected Abelian group, $\check{H}^1(\mathsf{S},\mathbb{Z})\cong \mathrm{Char}(\mathsf{S})$ (see \cite{Ste}). If $\chi:\mathsf{S}\longrightarrow \mathbb{S}^1$ is any character, $\chi$ is completely determined by its values when restricted to the dense one parameter subgroup $\mathcal{L}_0$.
Since $\mathcal{L}_0$ is canonically isomorphic to the additive group $(\mathbb R,+)$, the restriction of $\chi$ to $\mathcal{L}_0$ is of the form $t\longmapsto \exp(2\pi itq)$ and, since $\mathrm{Char}(\mathsf{S})\cong \mathbb{Q}$, the exponent $q$ must be rational. Now, given any $z\in \mathsf{S}$, there exists an element $n\in \widehat{\Z} \subset \mathsf{S}$ (which can be approximated by integers) such that $z+n \in \mathcal{L}_0$. By continuity, the value of the character $\chi$ at $z$ is \[ \chi(z) = \exp(2\pi iq(z+n)) = \exp(2\pi iqz), \] where $q\in \mathbb{Q}$ is the rational number determined by $\chi$. In this situation we will write $\chi(z) = \mathrm{Exp}{(2\pi iqz)}$. \subsubsection*[Homeomorphisms of $\mathsf{S}$]{Homeomorphisms of $\mathsf{S}$} We only consider the group which consists of all homeomorphisms of $\mathsf{S}$ which are isotopic to the identity and can be written as $f = \mathrm{id} + \varphi$, where $\varphi:\mathsf{S}\longrightarrow \mathsf{S}$ is given by $\varphi(t)=f(t)-t$ and describes the displacement of the point $t\in \mathsf{S}$ along the leaf containing it. The symbol ``$-$'' refers to the additive group operation in the solenoid. Denote the set of all such functions $\varphi$ by $\mathrm{C}_+(\mathsf{S})$. Since $f$ is a homeomorphism preserving the order in the leaves, it follows that there is a one to one correspondence between $\mathrm{C}_+(\mathsf{S})$ and the set of real valued continuous functions with the property that if $t$ and $s$ are in the same one dimensional leaf and $t<s$, then $t+\varphi(t)<s+\varphi(s)$. Therefore, $\mathrm{C}_+(\mathsf{S})$ can be identified with the Banach space of real valued continuous functions $\mathrm{C}(\mathsf{S},\mathbb R)$. As mentioned in the Introduction, the solenoid has a leafwise $\mathrm{C}^\infty$ Riemannian metric (i.e., $\mathrm{C}^\infty$ along the leaves) which renders each leaf isometric to the real line with its standard metric.
Hence, the displacement function $\varphi$ can be thought of as a continuous real valued function which we denote by the same symbol $\varphi$. In fact, since every leaf $\mathcal{L}\subset \mathsf{S}$ is dense, the restriction of this function to $\mathcal{L}$, denoted by $\varphi_{\mathcal{L}}$, completely determines the function. Furthermore, $\varphi_{\mathcal{L}}$ is an almost periodic function whose convex hull is the solenoid and thus, $\varphi_{\mathcal{L}}$ is a limit periodic function (see \cite{Pon}). \begin{remark} Denote by $\mathrm{Homeo}_+(\mathsf{S})$ the group of all homeomorphisms $f:\mathsf{S}\longrightarrow \mathsf{S}$ which are isotopic to the identity and can be written as $f = \mathrm{id} + \varphi$, with $\varphi\in \mathrm{C}_+(\mathsf{S})$; i.e., \[ \mathrm{Homeo}_+(\mathsf{S}) = \{f\in \mathrm{Homeo}(\mathsf{S}) : f = \mathrm{id} + \varphi, \; \varphi\in \mathrm{C}_+(\mathsf{S})\}. \] \end{remark} \subsection[The suspension of a homeomorphism]{The suspension of a homeomorphism} \label{suspension_homeomorphism} Let $f:\mathsf{S}\longrightarrow \mathsf{S}$ be any homeomorphism isotopic to the identity. On $\mathsf{S}\times \mathbb R$ define the $\mathbb{Z}$--action: \[ \mathbb{Z}\times (\mathsf{S}\times \mathbb R) \longrightarrow \mathsf{S}\times \mathbb R, \qquad (n,(z,t))\longmapsto (f^n(z),t+n). \] As usual, being in the same orbit defines an equivalence relation in the space $\mathsf{S}\times \mathbb R$ and if $(z,t)$ is any point in $\mathsf{S}\times \mathbb R$, denote by \[ [(z,t)] = \{ (f^n(z),t+n) : n\in \mathbb{Z} \} \] the equivalence class of the point, i.e. its $\mathbb{Z}$--orbit. The orbit space $\Sigma_f(\Ss)$ of this action is the \textsf{suspension} of $f$. Take any two points $[(z,t)]$ and $[(w,s)]$ in $\Sigma_f(\Ss)$ and define the following operation: \[ [(z,t)]\cdot [(w,s)] := [(z+w,t+s)].
\] Since $f$ is isotopic to the identity, it follows that $\Sigma_f(\Ss)$ is homeomorphic to the product space $\mathsf{S}\times \mathbb{S}^1$, which is a compact Abelian topological group. In $\Sigma_f(\Ss)$ there is a well defined flow \[ \phi:\mathbb R\times \Sigma_f(\Ss)\longrightarrow \Sigma_f(\Ss) \] called the \textsf{suspension flow} of $f$, given by \[ \phi(t,(z,s)) = (f^m(z),t+s-m), \] if $m\leq t+s < m+1$. The canonical projection $\pi:\mathsf{S}\times [0,1]\longrightarrow \Sigma_f(\Ss)$ sends $\mathsf{S}\times \{0\}$ homeomorphically onto its image $\pi(\mathsf{S}\times \{0\})\equiv \mathsf{S}$ and every orbit of the suspension flow intersects $\mathsf{S}$. In fact, the orbit of any $(z,0)\in \Sigma_f(\Ss)$ returns to $\mathsf{S}$ exactly at the integer times, since $\phi_T(z,0)=(f^T(z),0)$ for every integer $T$. \subsubsection*[Characters of the suspension]{Characters of the suspension} Denote by $\mathrm{C}(\Sigma_f(\Ss),\mathbb{S}^1)$ the topological space which consists of all continuous functions defined on $\Sigma_f(\Ss)$ with values in the unit circle $\mathbb{S}^1$, with the topology of uniform convergence on compact sets (i.e., the compact open topology). Clearly, this is an Abelian topological group under pointwise multiplication. The subset $R(\Sigma_f(\Ss),\mathbb{S}^1)\subset \mathrm{C}(\Sigma_f(\Ss),\mathbb{S}^1)$ which consists of continuous functions $h:\Sigma_f(\Ss)\longrightarrow \mathbb{S}^1$ that can be written as $h(z,s) = \exp(2\pi i\psi(z,s))$ with $\psi:\Sigma_f(\Ss)\longrightarrow \mathbb R$ a continuous function, is a closed subgroup. Hence, the quotient group $\mathrm{C}(\Sigma_f(\Ss),\mathbb{S}^1)/R(\Sigma_f(\Ss),\mathbb{S}^1)$ is a topological group. By Bruschlinsky--Eilenberg's theory (see \cite{Sch}), it is known that \[ \check{H}^1(\Sigma_f(\Ss),\mathbb{Z})\cong \mathrm{C}(\Sigma_f(\Ss),\mathbb{S}^1)/R(\Sigma_f(\Ss),\mathbb{S}^1).
\] Since \[ \check{H}^1(\Sigma_f(\Ss),\mathbb{Z})\cong \mathrm{Char}(\Sigma_f(\Ss)), \] it follows that \[ \mathrm{Char}(\Sigma_f(\Ss))\cong \mathrm{C}(\Sigma_f(\Ss),\mathbb{S}^1)/R(\Sigma_f(\Ss),\mathbb{S}^1). \] On the other hand, using the algebraic structure of the product group $\mathsf{S}\times \mathbb{S}^1$, the character group of the suspension is given by \[ \mathrm{Char}(\Sigma_f(\Ss))\cong \mathrm{Char}(\mathsf{S})\times \mathrm{Char}(\mathbb{S}^1)\cong \mathbb{Q}\times \mathbb{Z}. \] According to the definition of $\mathrm{Exp}$ in Subsection \ref{charactersS}, given any element $(q,n)\in \mathbb{Q}\times \mathbb{Z}$, the corresponding character $\chi_{q,n}\in \mathrm{Char}(\Sigma_f(\Ss))$ can be written as \begin{align*} \chi_{q,n}(z,s) &= \mathrm{Exp}(2\pi iqz)\cdot \exp(2\pi ins)\\ &= \mathrm{Exp}(2\pi i(qz+ns)), \end{align*} for any $(z,s)\in \Sigma_f(\Ss)$. \subsubsection*[Measures]{Measures} Given any $f$--invariant Borel probability measure $\mu$ on $\mathsf{S}$ and the usual Lebesgue measure $\lambda$ on $[0,1]$, the product measure $\mu\times \lambda$ defines a $\phi_t$--invariant Borel probability measure on $\Sigma_f(\Ss)$. Conversely, given any $\phi_t$--invariant Borel probability measure $\nu$ on $\Sigma_f(\Ss)$, an $f$--invariant Borel probability measure $\mu$ on $\mathsf{S}$ can be defined by disintegration with respect to the fibers. Denote by $\mathcal{P}_f(\mathsf{S})$ the weak$^*$ compact convex space of $f$--invariant Borel probability measures defined on $\mathsf{S}$. \section[The rotation set]{The rotation set} \label{rotation_set} This Section presents the notion of 1 -- cocycle and gives the definition of the generalized rotation set and element.
In order to define this generalized rotation element $\rho(f)$ associated to a homeomorphism $f:\mathsf{S}\longrightarrow \mathsf{S}$ isotopic to the identity, it is necessary to use Pontryagin's duality theory for compact abelian groups, the Bruschlinsky -- Eilenberg homology theory and Schwartzman theory of asymptotic cycles, as well as the notion of a 1--cocycle and ergodic theory. \subsection[1--cocycles]{1--cocycles} A 1--\textsf{cocycle} associated to the suspension flow $\phi_t$ is a continuous function \[ C:\mathbb R\times \Sigma_f(\Ss)\longrightarrow \mathbb R \] which satisfies the relation \[ C(t+u,(z,s)) = C(u,\phi_t(z,s)) + C(t,(z,s)), \] for every $t,u\in \mathbb R$ and $(z,s)\in \Sigma_f(\Ss)$. The set which consists of all 1--cocycles associated to $\phi_t$ is an Abelian group denoted by $\mathrm{C}^1(\phi)$. A 1--\textsf{coboundary} is the 1--cocycle determined by a continuous function $\psi:\Sigma_f(\Ss)\longrightarrow \mathbb R$ via \[ C(t,(z,s)):= \psi(z,s) - \psi(\phi_t(z,s)). \] The set of 1--coboundaries $\Gamma^1(\phi)$ is a subgroup of $\mathrm{C}^1(\phi)$ and the quotient group \[ H^1(\phi):=\mathrm{C}^1(\phi)/\Gamma^1(\phi) \] is called the 1--\textsf{cohomology group} associated to $\phi_t$. The proof of the next proposition (for an arbitrary compact metric space) can be seen in \cite{Ath}. \begin{proposition} \label{associated_cocycle} For every continuous function $h:\Sigma_f(\Ss)\longrightarrow \mathbb{S}^1$ there exists a unique 1--cocycle $C_h:\mathbb R\times \Sigma_f(\Ss)\longrightarrow \mathbb R$ associated to $h$ such that \[ h(\phi_t(z,s)) = \exp(2\pi iC_h(t,(z,s)))\cdot h(z,s), \] for every $(z,s)\in \Sigma_f(\Ss)$ and $t\in \mathbb R$.
\end{proposition} This proposition implies that there is a well defined homomorphism \[ \mathrm{Char}(\Sigma_f(\Ss))\cong \check{H}^1(\Sigma_f(\Ss),\mathbb{Z})\longrightarrow H^1(\phi) \] obtained by sending any character $\chi_{q,n}\in \mathrm{Char}(\Sigma_f(\Ss))$ to the cohomology class $[C_{\chi_{q,n}}]$, where $C_{\chi_{q,n}}$ is the unique 1--cocycle associated to $\chi_{q,n}$. Applying the above proposition to any nontrivial character $\chi_{q,n}\in \mathrm{Char}(\Sigma_f(\Ss))$, the following relation is obtained: \[ \chi_{q,n}(\phi_t(z,s)) = \exp(2\pi iC_{\chi_{q,n}}(t,(z,s)))\cdot \chi_{q,n}(z,s). \] Using the explicit expressions for the characters on both sides of the above equation, the next equalities hold \begin{align*} \chi_{q,n}(\phi_t(z,s)) &= \chi_{q,n}(f^m(z),t+s-m)\\ &= \mathrm{Exp}(2\pi i(qf^m(z) + n(t+s-m)))\\ &= \mathrm{Exp}(2\pi i(qf^m(z) + nt + ns)) \end{align*} and \[ \chi_{q,n}(z,s) = \mathrm{Exp}(2\pi i(qz+ns)). \] Comparing these two expressions one gets \begin{equation} \label{cocycle1} C_{\chi_{q,n}}(t,(z,s)) = q(f^m(z)-z) + nt. \end{equation} Now recall that $f:\mathsf{S}\longrightarrow \mathsf{S}$ is a homeomorphism isotopic to the identity of the form $f=\mathrm{id} + \varphi$, where $\varphi:\mathsf{S}\longrightarrow \mathsf{S}$ is the displacement function; as described before, $\varphi$ can also be considered as a real valued function on the solenoid. If $t=1$ and $s\in [0,1)$, then $m=1$ and the 1--cocycle at time $t=1$ is \begin{equation} \label{time1cocycle} C_{\chi_{q,n}}(1,(z,s)) = q\varphi(z) + n. \end{equation} \subsection[The rotation element]{The rotation element} If $\nu$ is any $\phi_t$--invariant Borel probability measure on $\Sigma_f(\Ss)$, by Birkhoff's ergodic theorem there is a well defined homomorphism $H^1(\phi)\longrightarrow \mathbb R$ given by \[ [C_\chi]\longmapsto \int_{\Sigma_f(\Ss)} C_\chi(1,(z,s)) d\nu.
\] Now, composing the two homomorphisms \[ \mathrm{Char}(\Sigma_f(\Ss))\longrightarrow H^1(\phi)\longrightarrow \mathbb R \] one obtains a well defined homomorphism $H_{f,\nu}:\mathrm{Char}(\Sigma_f(\Ss))\longrightarrow \mathbb R$ given by \[ H_{f,\nu}(\chi_{q,n}) := \int_{\Sigma_f(\Ss)} C_{\chi_{q,n}}(1,(z,s)) d\nu. \] Denote by $\mu$ the $f$--invariant Borel probability measure on $\mathsf{S}$ obtained by disintegration of $\nu$ with respect to the fibers. Evaluating the above integral using equation (\ref{time1cocycle}) gives \begin{align*} H_{f,\nu}(\chi_{q,n}) &= \int_{\Sigma_f(\Ss)} (q\varphi + n)d\nu\\ &= q\int_\mathsf{S} \varphi d\mu + n. \end{align*} Hence $H_{f,\nu}$ determines an element in $\mathrm{Hom}(\mathrm{Char}(\Sigma_f(\Ss)),\mathbb R)$ for each measure $\nu$ on $\Sigma_f(\Ss)$ and, therefore, for each measure $\mu\in \mathcal{P}_f(\mathsf{S})$. Thus one gets a well defined function \[ H_f:\mathcal{P}_f(\mathsf{S})\longrightarrow \mathrm{Hom}(\mathrm{Char}(\Sigma_f(\Ss)),\mathbb R) \] given by $\mu\longmapsto H_{f,\mu}$, where $H_{f,\mu}$ is \[ H_{f,\mu}(\chi_{q,n}) = q\int_\mathsf{S} \varphi d\mu + n. \] By composing $H_f$ with the continuous homomorphism \[ \mathrm{Hom}(\mathrm{Char}(\Sigma_f(\Ss)),\mathbb R)\longrightarrow \mathrm{Char}(\mathrm{Char}(\Sigma_f(\Ss))) \] given by \[ H_{f,\mu}\longmapsto \pi\circ H_{f,\mu}, \] where $\pi:\mathbb R\longrightarrow \mathbb{S}^1$ is the universal covering projection, we obtain a well defined continuous function $\rho:\mathcal{P}_f(\mathsf{S})\longrightarrow \mathrm{Char}(\mathrm{Char}(\Sigma_f(\Ss)))$ given by \[ \mu\longmapsto \rho_\mu := \pi\circ H_{f,\mu}.
\] That is, for each $\mu\in \mathcal{P}_f(\mathsf{S})$, there exists a well defined continuous homomorphism \[ \rho_\mu:\mathrm{Char}(\Sigma_f(\Ss))\longrightarrow \mathbb{S}^1 \] given by \begin{align*} \rho_\mu(\chi_{q,n}) &:= \exp(2\pi iH_{f,\mu}(\chi_{q,n}))\\ &= \exp \left(2\pi iq \int_\mathsf{S} \varphi d\mu\right).\\ \end{align*} By Pontryagin's duality theorem, \[ \mathrm{Char}(\mathrm{Char}(\Sigma_f(\Ss)))\cong \Sigma_f(\Ss) \] and therefore $\rho_\mu\in \Sigma_f(\Ss)$. Since $\Sigma_f(\Ss)\cong \mathsf{S}\times \mathbb{S}^1$ and $\rho_\mu(\chi_{q,n})=\rho_\mu(\chi_{q,0})$, it follows that $\rho_\mu$ does not depend on the second component and so, the identification $\rho_\mu = (\rho_\mu,1)\in \mathsf{S}\times \mathbb{S}^1$ can be made. More precisely, it is well known that every nontrivial character of $\mathrm{Char}(\mathsf{S})\cong \mathbb{Q}$ is of the form $\chi_a$ for some $a\in \mathbb{A}$ and the map $\mathbb{A}\longrightarrow \mathrm{Char}(\mathbb{Q})$ given by $a\longmapsto \chi_a$ induces an isomorphism $\mathrm{Char}(\mathbb{Q})\cong \mathbb{A}/\mathbb{Q}\cong \mathsf{S}$. This produces a genuine element $\rho_\mu\in \mathsf{S}$. \begin{definition} The element $\rho_\mu(f) := \rho_\mu \in \mathsf{S}$ defined above is the \emph{\textsf{rotation element}} associated to $f$ with respect to the measure $\mu$. \end{definition} \begin{remark} By definition, $\rho_\mu(f)$ can be identified with the element $\int_\mathsf{S} \varphi d\mu$ in the solenoid $\mathsf{S}$ determined by the character of $\mathbb{Q}$ given by \[ q\longmapsto \mathrm{Exp} \left(2\pi iq\int_\mathsf{S} \varphi d\mu \right). \] That is, $\rho_\mu(f)$ is \emph{solenoid -- valued}. \end{remark} If $\rho:\mathcal{P}_f(\mathsf{S})\longrightarrow \mathsf{S}$ is the map given by $\mu\longmapsto \rho_\mu(f)$, then $\rho$ is continuous from $\mathcal{P}_f(\mathsf{S})$ to $\mathsf{S}$. 
Since $\mathcal{P}_f(\mathsf{S})$ is compact and convex, and $f$ is isotopic to the identity, the image $\rho(\mathcal{P}_f(\mathsf{S}))$ is a compact subset of $\mathsf{S}$. \begin{definition} \label{rotationset} The \emph{\textsf{rotation set}} of $f$ is $\rho(\mathcal{P}_f(\mathsf{S}))$. \end{definition} Since the path components of $\mathsf{S}$ are the one dimensional leaves, the set $\rho(\mathcal{P}_f(\mathsf{S}))$ is a compact interval $I_f$, which, up to a translation, is contained in the one parameter subgroup $\mathcal{L}_0$. Since $\mathcal{L}_0$ is canonically isomorphic to $\mathbb R$, it is possible to identify $I_f$ with an interval in the real line. In particular, if $f$ is uniquely ergodic, then the interval $I_f$ reduces to a point and the rotation element is a unique element of $\mathsf{S}$. \begin{remark} The rotation element of $f$ can be interpreted as the exponential of an asymptotic cycle, in the sense of Schwartzman, of the suspension flow $\{\phi_t\}_{t\in\mathbb R}$ of $f$ (see \cite{Sch}; see also \cite{AK}, \cite{Pol}). If $A_\nu\in \mathrm{Hom}(\check{H}^1(\Sigma_f(\Ss),\mathbb{Z}),\mathbb R)=\mathrm{Hom}(\mathrm{Char}(\Sigma_f(\Ss)),\mathbb R)$ denotes the asymptotic cycle associated to the $\{\phi_t\}_{t\in\mathbb R}$--invariant measure $\nu$, then $\rho_\nu(f)=\exp(2\pi iA_\nu)$. \end{remark} \begin{remark} From Birkhoff's ergodic theorem, for any ergodic $f$--invariant measure $\mu$, $$ \int_\mathsf{S} \varphi d\mu = \underset{n\to\infty}\lim \ \frac{1}{n} \sum_{j=0}^{n-1} \varphi(f^j(z)), $$ for $\mu$--almost every point $z\in\mathsf{S}$. We could have used this to define the rotation element with respect to an (ergodic) measure. Since we wanted to make explicit the role of the measure, we used the theory of asymptotic cycles in the sense of Schwartzman. (Compare \cite{Kwa}, Theorem 3.)
\end{remark} \subsection[Basic example and properties]{Basic example and properties} \subsubsection*[Basic example: Rotations]{Basic example: Rotations} Let $\alpha$ be any element in $\mathcal{L}_0\subset \mathsf{S}$ and consider the rotation $R_\alpha:\mathsf{S}\longrightarrow \mathsf{S}$ given by $z\longmapsto z + \alpha$. The suspension flow $\phi_t:\mathsf{S}\times \mathbb{S}^1\longrightarrow \mathsf{S}\times \mathbb{S}^1$ is given by \[ \phi_t(z,s) = (z+m\alpha,t+s-m), \] if $m\leq t+s< m+1$. If $\chi_{q,n}\in \mathrm{Char}(\Sigma_f(\Ss))$ is any nontrivial character then \[ H_{R_\alpha,\mu}(\chi_{q,n}) = q\int_\mathsf{S} \alpha d\mu + n = q\alpha + n. \] This implies that \[ \rho_\mu(R_\alpha)(\chi_{q,n}) = \exp(2\pi i(q\alpha + n)) = \exp(2\pi iq\alpha) \] and $\rho_\mu(R_\alpha) = \alpha$. \subsubsection*[Properties]{Properties} \begin{enumerate} \item \textsf{(Invariance under conjugation)} Let $f$ and $g$ be any two homeomorphisms isotopic to the identity and $h=\mathrm{id}+\psi$. If $h\circ f=g\circ h$ then $\rho_\mu(f)=\rho_\mu(g)$. In particular, if $f$ is conjugated to a rotation $R_\alpha$ then $\rho_\mu(f)=\alpha$. \begin{proof} Observe first that $h\circ f=g\circ h$ implies that $h\circ f^m=g^m\circ h$, that is, $f^m + \psi\circ f^m=g^m\circ h$. Since $h-\mathrm{id}=\psi$, it follows that \[ f^m-\mathrm{id} = (g^m-\mathrm{id})\circ h + \psi - \psi\circ f^m. \] Therefore the 1--cocycle associated to any nontrivial character $\chi_{q,n}$ at time $t=1$ has the form \begin{align*} C_{\chi_{q,n}}(1,(z,s)) &= q(f(z)-z) + n\\ &= q[(g(h(z))-h(z)) + \psi(z) - \psi\circ f(z)] + n. \end{align*} Since $\mu$ is both $f$ and $g$ invariant, we get \begin{align*} H_{f,\mu}(\chi_{q,n}) &= q\int_\mathsf{S} (f(z)-z)d\mu + n\\ &= q\int_\mathsf{S} (g(z)-z)dh_*\mu + n\\ &= H_{g,\mu}(\chi_{q,n}). \end{align*} Hence, $\rho_\mu(f)=\rho_\mu(g)$.
\end{proof} \item \textsf{(Continuity)} The function $\rho_\mu:\mathrm{Homeo}_+(\mathsf{S})\longrightarrow \mathsf{S}$ given by \[ f=\mathrm{id}+\varphi\longmapsto \int_\mathsf{S} \varphi d\mu \] is continuous with respect to the uniform topology in $\mathrm{Homeo}_+(\mathsf{S})$. \item \textsf{(The rotation element is equal to zero if and only if $f$ has a fixed point)} Indeed, if $f$ has a fixed point $x$ then $\varphi(x)=0$; if $\mu=\delta_x$ is the Dirac mass at $x$ then $\int_\mathsf{S} \varphi d\mu=0$ and therefore $\rho_\mu(f)=0$. On the other hand, if $\rho_\mu(f)=0$ then $\int_\mathsf{S} \varphi d\mu=0$ and $\varphi$ must vanish at some point $x$ which must be a fixed point of $f$. \end{enumerate} \subsection[The rotation element \`a la de Rham]{The rotation element \`a la de Rham} If $d\lambda$ denotes the usual Lebesgue measure on $\mathbb{S}^1$, then, given any character $\chi_{q,n}\in \mathrm{Char}(\Sigma_f(\Ss))$ there is a well defined closed differential one form on $\Sigma_f(\Ss)$ given by \[ \omega_{\chi_{q,n}} := \chi_{q,n}^* d\lambda. \] Let $X$ be the vector field tangent to the flow $\phi_t$ and let $\nu$ be any $\phi_t$--invariant Borel probability measure on $\Sigma_f(\Ss)$. Define \[ H_{f,\nu}:\mathrm{Char}(\Sigma_f(\Ss))\longrightarrow \mathbb R \] by \[ H_{f,\nu}(\chi_{q,n}) := \int_{\Sigma_f(\Ss)} \omega_{\chi_{q,n}}(X) d\nu \] and observe that this definition only depends on the cohomology class of $\omega_{\chi_{q,n}}$ and the measure class of $\nu$. Hence, there is a well defined continuous homomorphism $\rho(f):\mathrm{Char}(\Sigma_f(\Ss))\longrightarrow \mathbb{S}^1$ given by \[ \rho(f)(\chi_{q,n}) := \exp(2\pi iH_{f,\nu}(\chi_{q,n})). \] Thus, as before, \[ \rho(f)\in \mathrm{Char}(\mathrm{Char}(\Sigma_f(\Ss)))\cong \Sigma_f(\Ss). \] \begin{proposition} $\rho(f)$ is the rotation element associated to $f$ corresponding to $\nu$. 
\end{proposition} \begin{example} Let $\alpha$ be any element in $\mathsf{S}$ and consider the rotation $R_\alpha:\mathsf{S}\longrightarrow \mathsf{S}$ given by $z\longmapsto z+\alpha$. The suspension flow $\phi_t:\mathsf{S}\times \mathbb{S}^1\longrightarrow \mathsf{S}\times \mathbb{S}^1$ is given by \[ \phi_t(z,s) = (z+m\alpha,t+s-m)\quad (m\leq t+s < m+1). \] Given any character $\chi_{q,n}\in \mathrm{Char}(\Sigma_f(\Ss))$, \[ \omega_{\chi_{q,n}} = qd\theta + nd\lambda \] and the vector field $X$ associated to $\phi_t$ is constant. In this case, $H_{R_\alpha,\mu}(\chi_{q,n}) = \alpha q + n$ and therefore \[ \rho(R_\alpha)(\chi_{q,n}) = \exp(2\pi iq\alpha). \] That is, $\rho(R_\alpha)=\alpha$, which clearly coincides with the calculation made before. \end{example} \section[Poincar\'e theory for compact abelian one dimensional solenoidal groups]{Poincar\'e theory for compact abelian one dimensional solenoidal groups} \label{Poincare_theory} This section presents a proof of the semiconjugacy theorem for homeomorphisms of $\mathsf{S}$ which are isotopic to the identity and are pseudoirrational rotations. \subsection[The rotation interval]{The rotation interval} \label{rotation_interval} Recall that since $\mathcal{P}_f(\mathsf{S})$ is compact and convex, and $f$ is isotopic to the identity, the image $\mathfrak R(\mathcal{P}_f(\mathsf{S}))$ is a compact interval $I_f$, which, up to a translation, is contained in the one parameter subgroup $\mathcal{L}_0$. This interval is called the \textsf{rotation interval} of $f$. Since $\mathcal{L}_0$ is canonically isomorphic to $\mathbb R$, it is possible to identify $I_f$ with an interval in the real line. \begin{remark} If $f$ is isotopic to an irrational rotation $R_\alpha$ with $\alpha\notin \mathcal{L}_0$ (see Section \ref{irrational_rotations} below), the rotation interval $I_f$ of $f$ can be identified with $I_f\subset \mathcal{L}_0 + \alpha$.
\end{remark} \begin{definition} \label{pseudoirrational_rotation} The homeomorphism $f$ is a \emph{\textsf{pseudoirrational rotation}} if $I_f$ consists of a single point $I_f=\{\alpha\}$. In this case, and only in this case, we call $\alpha$ the \emph{rotation element} of $f$. \end{definition} \begin{remark} For homeomorphisms of tori $\mathbb{T}^n$ with $n\geq2$ the rotation set is, in general, not an interval \cite{MZ}. In our case, since the solenoids are one dimensional, the rotation set is a closed interval, which may reduce to a single point. \end{remark} \begin{remark} There are examples of \emph{diffeomorphisms} $h:\mathsf{S}\to\mathsf{S}$ (i.e. homeomorphisms which are differentiable along the leaves) such that the rotation interval is nontrivial (consists of more than one point). This is obtained by adapting Katok's quintessential example on the torus to our case: let $X$ be the canonical unit vector field tangent to the leaves and $g:\mathsf{S}\longrightarrow \mathbb R$ a differentiable function that vanishes at a single point $p\in \mathsf{S}$, let $\{h_t\}_{t\in\mathbb R}$ be the flow of the vector field $Y=gX$; then we can take $h=h_1$. \end{remark} \subsection[Irrational rotations]{Irrational rotations} \label{irrational_rotations} Since $\mathsf{S}$ is torsion free, it follows that a nontrivial rotation has no periodic points. This means that the dichotomy rational / irrational does not appear in this context and we only have to define what ``irrational'' means. The following seems to be an appropriate definition: \begin{definition} \label{irrational_element} An element $\alpha\in \mathsf{S}$ is \emph{\textsf{irrational}} if $\{n\alpha:n\in \mathbb{Z}\}$ is dense in $\mathsf{S}$. In classical terminology, $\mathsf{S}$ is said to be \emph{\textsf{monothetic}} with generator $\alpha$. \end{definition} Since $\mathsf{S}$ is a compact abelian topological group, the next theorem is classical (see e.g.
\cite{Gra}). \begin{theorem} \label{equidistribution} If $\alpha\in \mathsf{S}$, the following propositions are equivalent: \begin{enumerate} \item The rotation $R_\alpha:\mathsf{S}\longrightarrow \mathsf{S}$ given by $z\longmapsto z+\alpha$ is ergodic with respect to the Haar measure on $\mathsf{S}$. \item $\chi(\alpha)\neq 1$, for every nontrivial character $\chi\in \mathrm{Char}(\mathsf{S})$. \item $\mathsf{S}$ is a monothetic group with generator $\alpha$. \end{enumerate} \end{theorem} \begin{remark} \begin{enumerate} \item Any nontrivial character $\chi\in \mathrm{Char}(\mathsf{S})$ describes the solenoid $\mathsf{S}$ as a locally trivial fiber bundle over the circle $\mathbb{S}^1$ with typical fiber a Cantor group. In fact, there is such a fibration for each $q\in \mathbb{Q}\setminus \{1\}$. \item For every $\alpha\in \mathsf{S}$ and every nontrivial character, $\chi\circ R_\alpha = R_{\chi(\alpha)}\circ \chi$. \item If $\alpha\in \mathsf{S}$ is irrational then $\chi(\alpha)\in \mathbb{S}^1$ is irrational, for every nontrivial character $\chi\in \mathrm{Char}(\mathsf{S})$. \end{enumerate} \end{remark} \subsection[Generalized Poincar\'e theorem]{Generalized Poincar\'e theorem} From now on we suppose that $f$ is a pseudoirrational rotation with unique rotation element $\rho(f)$. Recall that $\mathsf{S}$ is the orbit space of $\mathbb R\times \widehat{\Z}$ under the $\mathbb{Z}$ -- action \[ \gamma\cdot (t,k) = (t+\gamma,k-\gamma)\qquad (\gamma\in \mathbb{Z}). \] Denote by $p:\mathbb R\times \widehat{\Z}\longrightarrow \mathsf{S}$ the canonical projection. It is clear that $p$ is an infinite cyclic covering.
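In the circle case, the equivalence between ergodicity and the character condition in the theorem above reduces to classical Weyl equidistribution, which is easy to check numerically: for irrational $\alpha$ the orbit averages of a nontrivial character tend to $0$, while for rational $\alpha$ some nontrivial character has constant average $1$. A minimal Python sketch (the values of $\alpha$, $q$ and the sample sizes are illustrative):

```python
import cmath
import math

def weyl_average(alpha, q, n_terms):
    """Average of the character chi_q(z) = exp(2*pi*i*q*z) along the
    orbit {m*alpha mod 1 : m < n_terms} of the rotation R_alpha."""
    total = sum(cmath.exp(2j * math.pi * q * m * alpha) for m in range(n_terms))
    return abs(total) / n_terms

# Irrational alpha: averages of every nontrivial character decay,
# reflecting ergodicity of R_alpha with respect to Haar (Lebesgue) measure.
print(weyl_average(math.sqrt(2), 1, 20000))  # small, tends to 0

# Rational alpha: chi_2 takes the constant value 1 on the orbit of 1/2,
# so the average stays near 1 and R_{1/2} is not ergodic.
print(weyl_average(0.5, 2, 500))
```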
If $F:\mathbb R\times \widehat{\Z}\longrightarrow \mathbb R\times \widehat{\Z}$ is a lifting of $f$ to $\mathbb R\times \widehat{\Z}$, then $F$ has the form \[ F(t,k) = (F_k(t),R_\alpha(k)), \] where $k\longmapsto F_k$ is a continuous function $\widehat{\Z}\longrightarrow \mathrm{Homeo}(\mathbb R)$, each $F_k:\mathbb R\longrightarrow \mathbb R$ is a homeomorphism with limit periodic displacement $\Phi_k(t):=F_k(t)-t$ (i.e., $\Phi_k$ is a uniform limit of periodic functions) and $\alpha\in \widehat{\Z}$ is a monothetic generator. The condition for $F$ to be equivariant with respect to the $\mathbb{Z}$ -- action is \[ F_{k-\gamma}(t+\gamma) = F_k(t) + \gamma, \] for any $\gamma\in \mathbb{Z}$. That is, $F$ commutes with the integral translation $T_\gamma:\mathbb R\times \widehat{\Z}\longrightarrow \mathbb R\times \widehat{\Z}$ given by $(t,k)\longmapsto (t+\gamma,k)$ and must also be invariant under the $\mathbb{Z}$ -- action on $\mathrm{C}(\widehat{\Z},\mathrm{Homeo}(\mathbb R))$. \begin{remark} It is very important to emphasize at this point that a lifting $F$ of $f$ exists and is a homeomorphism of $\mathbb R\times \widehat{\Z}$ because $f$ is isotopic to the identity, which implies that $f$ keeps invariant the one dimensional leaves of the solenoid. As a consequence, $F$ leaves invariant the one dimensional leaves of $\mathbb R\times \widehat{\Z}$. Since each leaf is canonically identified with $\mathbb R$, the \emph{displacement function} along the leaves can be defined in the obvious way. \end{remark} By the comments above it is adequate to consider $F$ as follows: \[ F(t,k)=\left( F_k(t),k \right)\qquad (t,k)\in \mathbb R\times\widehat{\Z}, \] where $F_k:\mathbb R\longrightarrow \mathbb R$ is a homeomorphism which depends continuously on $k\in \widehat{\Z}$.
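When $\widehat{\Z}$ is replaced by a single point, this reduces to the classical setting of a lift $F$ of a circle homeomorphism, where the translation number $\tau=\lim_n F^n(t)/n$ and the deviations $F^n(t)-t-n\tau$ can be computed directly; these deviations are exactly what the bounded mean variation condition below controls. A numerical sketch for one illustrative lift (the parameters are hypothetical):

```python
import math

def F(t, alpha=0.1234, eps=0.1):
    """A lift of a circle homeomorphism: F(t+1) = F(t) + 1, and F is
    strictly increasing since eps < 1/(2*pi)."""
    return t + alpha + eps * math.sin(2 * math.pi * t)

# Iterate the lift and estimate the translation (rotation) number tau.
n = 4000
ts = [0.0]
for _ in range(n):
    ts.append(F(ts[-1]))
tau = ts[n] / n

# The deviations F^k(0) - k*tau stay uniformly bounded for this map,
# i.e. it has bounded mean variation in the sense defined below.
max_dev = max(abs(ts[k] - k * tau) for k in range(n + 1))
print(0.0 < tau < 1.0, max_dev < 3.0)
```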
\begin{definition} \label{bounded_meanvariation} The homeomorphism $f$ has \emph{\textsf{bounded mean variation}} if there exists $C>0$ such that the sequence $\{F_k^n(t) - t - n\tau(F)\}_{n\geq 1}$ is uniformly bounded by $C$, i.e. $C$ is independent of $(t,k)$. Here, $F_k^n$ is any lift of $f^n$ and $\tau(F)$ is a lifting of $\rho(f)$ to $\mathbb R\times \widehat{\Z}$ contained in the same leaf as $(t,k)$. \end{definition} \begin{remark} Observe that if the sequence $\{F_k^n(t) - t - n\tau(F)\}_{n\geq 1}$ is uniformly bounded, the sequence $\{F_k^n(t) - t - n\tau'\}_{n\geq 1}$ is also uniformly bounded if and only if $\tau'=\tau(F)$. Therefore, if a homeomorphism has bounded mean variation it is a pseudoirrational rotation. \end{remark} We can now state and prove the generalized version of Poincar\'e's theorem: the proof follows closely the classical proof (see \cite{Ghys}, \cite{Nav}). \begin{theorem} \label{Poincare_theorem} If $f\in \mathrm{Homeo}_+(\mathsf{S})$ is a pseudoirrational rotation with irrational rotation element $\rho(f)$, then $f$ is semiconjugated to the irrational rotation $R_{\rho(f)}$ if and only if $f$ has bounded mean variation. \end{theorem} \begin{proof} The function $H:\mathbb R\times \widehat{\Z}\longrightarrow \mathbb R\times \widehat{\Z}$ given by \[ (t,k)\longmapsto (\sup_n \, \{F_k^n(t) - n\tau(F)\},k) \] satisfies the following properties: \begin{enumerate} \item $H$ is nondecreasing, surjective and continuous on the left. \item $H\circ T_1 = T_1\circ H$ \item $H\circ F = T_{\tau(F)}\circ H$. \end{enumerate} The proof that $H$ is nondecreasing follows from the fact that the lifting $F^n$ of $f^n$ is nondecreasing. The other properties on Condition (1) and Condition (2) are direct consequences of the definition of $H$ as a supremum. Condition (2) implies that $H$ descends to a map $h:\mathsf{S}\longrightarrow\mathsf{S}$. Condition (3) implies that $h\circ{f}=R_{\rho(f)}\circ{h}$. 
Following almost \emph{verbatim} the arguments in (\cite{Ghys}, and \cite{Nav}, Theorem 2.2.6), it follows that $h$ is continuous and semiconjugates $f$ to $R_{\rho(f)}$. This follows from the fact that $F$ preserves each leaf of the form $\mathbb R\times \{k\}$, with $k\in \widehat{\Z}$, and the map $g:\mathbb R\longrightarrow \mathbb R$ given by $t\longmapsto \mathfrak{p}(F(t,k))-t$ is a quasihomomorphism. \end{proof} The immediate question arises: \begin{question} \label{question_conjugacy} Under the same hypothesis of this theorem, is $f$ conjugated to the rotation $R_{\rho(f)}$ when $f$ is minimal? \end{question} This question, together with a complete development of the topological dynamics, is the subject of forthcoming research (see \cite{CV}). \section[The rotation element of a homeomorphism isotopic to a translation]{The rotation element of a homeomorphism isotopic to a translation} \label{rotation_translation} This section introduces an appropriate definition of a rotation element for a homeomorphism of the one dimensional universal solenoid which is isotopic to a minimal translation by an element which is not in the base leaf. First, we describe the suspension of a minimal translation of a general compact abelian group $G$, which turns out to be a compact abelian group as well. It follows, as a corollary, that the suspension of any homeomorphism of $G$ which is isotopic to a minimal translation is also a compact abelian group. This result is used to define the rotation element, generalizing Poincar\'e's notion by means of the theory of asymptotic cycles. \subsection[The suspension of a homeomorphism isotopic to a translation]{The suspension of a homeomorphism isotopic to a translation}\label{suspension_translation} Let $G$ be a metrizable compact abelian group and let $g\in G$. Consider the subgroup $\Gamma_g$ of the product group $G\times \mathbb R$ defined as follows: \[ \Gamma_g := \left\{ (g^n,n)\in G\times \mathbb R : n\in\mathbb{Z} \right\}.
\] $\Gamma_g$ is isomorphic to $\mathbb{Z}$ via the monomorphism $n\longmapsto (g^n,n)$. The group $\Gamma_g$ is a discrete, closed and normal subgroup of $G\times \mathbb R$. Also, $\Gamma_g$ is cocompact, since $(G\times\mathbb R) / \Gamma_g$ is the image $p(G\times [0,1])$ under the canonical epimorphism $p : G\times\mathbb R \longrightarrow (G\times \mathbb R)/\Gamma_g$. \begin{remark} Let $G$ be a metrizable compact abelian group and consider the translation $T=T_g:z\longmapsto gz$. The \emph{\textsf{suspension}} of $T$ is the space \[ \Sigma_T(G) : = G\times \mathbb R / (x,1)\sim (T(x),0). \] In fact, the group $(G\times\mathbb R)/\Gamma_g$, as a topological space, is homeomorphic to the suspension of the translation $T_g:G\longrightarrow G$, so we denote the group $(G\times\mathbb R)/\Gamma_g$ alternatively as $\Sigma_{T_g}(G)$. \end{remark} The following theorem holds: \begin{theorem} $\Sigma_T(G)$ is a compact abelian group which contains $G$ as a closed subgroup and \[ \Sigma_T(G)/G \cong \mathbb{S}^1. \] \end{theorem} \begin{corollary} Since the suspension only depends on the isotopy class of the homeomorphism, the suspension of any homeomorphism $f:G\longrightarrow G$ isotopic to a translation is a compact abelian group. If the translation is a minimal translation, then the suspension flow is a minimal flow (in fact the orbit through the identity is a dense one parameter subgroup). \end{corollary} \begin{example} \begin{enumerate} \item The $2$ -- torus $\mathbb{T}^2$ is the suspension of an irrational rotation on the circle. \item The universal solenoid $\mathsf{S}$ is the suspension of a minimal translation of $\widehat{\Z}$. \end{enumerate} \end{example} For the particular case of the universal solenoid $\mathsf{S}$, it is not true that every homeomorphism is isotopic to a translation.
In fact, the following result is proved in \cite{Odd}: \begin{theorem} If $\mathrm{Homeo}_\mathcal{L}(\mathsf{S})$ is the subgroup of $\mathrm{Homeo}(\mathsf{S})$ consisting of homeomorphisms of $\mathsf{S}$ that preserve the base leaf, then \[ \mathrm{Homeo}(\mathsf{S}) \cong \mathrm{Homeo}_\mathcal{L}(\mathsf{S}) \times_\mathbb{Z} \widehat{\Z}. \] \end{theorem} For instance, by Pontryagin duality, the group of automorphisms of $\mathsf{S}$ is isomorphic to the group of automorphisms of $\mathbb{Q}$, which is $\mathbb{Q}^*$, since any automorphism is determined by its value at $1$. Hence, no nontrivial automorphism of $\mathsf{S}$ is isotopic to a translation. \begin{remark} Elements in the same one dimensional leaf $\mathcal{L}$ of $\mathsf{S}$ determine isotopic translations. If an element $f\in \mathrm{Homeo}(\mathsf{S})$ is isotopic to a translation, then $f$ is isotopic to a translation of the form $\mathfrak{t} +\gamma$, where $\gamma\in \mathcal{L}\cap \widehat{\Z}$. \end{remark} \subsection[The rotation element of a homeomorphism isotopic to a translation]{The rotation element with respect to an invariant measure of a homeomorphism isotopic to a translation} \label{rotation_isotopic-translations} Suppose that $f:\mathsf{S}\longrightarrow \mathsf{S}$ is a homeomorphism which is isotopic to a minimal translation by an element not in the base leaf. According to the last section, the suspension $\Sigma_f(\Ss)$ is a compact abelian group and there is a natural continuous group epimorphism $\Sigma_f(\Ss) \longrightarrow \mathbb{S}^1$ whose kernel is a closed subgroup of $\Sigma_f(\Ss)$ isomorphic to $\mathsf{S}$. Hence, there is an exact sequence of compact abelian groups \[ 0\longrightarrow \mathsf{S}\longrightarrow \Sigma_f(\Ss)\longrightarrow \mathbb{S}^1 \longrightarrow 0. \] By duality, there is an exact sequence of discrete groups \[ 0\longrightarrow \mathbb{Z}\longrightarrow \mathrm{Char}(\Sigma_f(\Ss))\longrightarrow \mathbb{Q} \longrightarrow 0.
\] In this situation, we do not know an explicit description of $\mathrm{Char}(\Sigma_f(\Ss))$ and its elements. Hence, the calculation of the $1$--cocycle describing the homomorphism $H_{f,\mu}\in \mathrm{Hom}(\mathrm{Char}(\Sigma_f(\Ss)),\mathbb R)$ is not as transparent as in the isotopic-to-the-identity case. However, knowing that $\Sigma_f(\Ss)$ is a compact abelian group, it is possible to calculate the values of $H_{f,\mu}$ by restricting the elements of $\mathrm{Char}(\Sigma_f(\Ss))$ to elements of $\mathrm{Char}(\mathsf{S})$. Proceeding as in Section \ref{rotation_set}, this can be done in the following way (compare \cite{Ath}). Denote by $[z,t]$ the elements in the suspension $\Sigma_f(\Ss)$, which are now equivalence classes of pairs $(z,t)$ under the suspension relation. The suspension flow is given by \[ \phi_t([z,s]) = [f^m(z),t+s-m], \] where $m\leq t+s < m+1$. As before, the canonical projection $\pi:\mathsf{S}\times [0,1]\longrightarrow \Sigma_f(\Ss)$ sends $\mathsf{S}\times \{0\}$ homeomorphically onto its image $\pi(\mathsf{S}\times \{0\})\equiv \mathsf{S}$ and every orbit of the suspension flow intersects $\mathsf{S}$. If $\nu$ is any $\phi_t$--invariant Borel probability measure on $\Sigma_f(\Ss)$, then there is a well defined homomorphism $H_{f,\nu}:\mathrm{Char}(\Sigma_f(\Ss))\longrightarrow \mathbb R$ given by \[ H_{f,\nu}(\chi) = \int_{\Sigma_f(\Ss)} C_{\chi}(1,[z,s]) d\nu, \] where $C_{\chi}(1,[z,s])$ is the $1$--cocycle associated to any nontrivial character $\chi\in \mathrm{Char}(\Sigma_f(\Ss))$, at time $t=1$. Now recall that the $1$--cocycle associated to $\chi$ satisfies the relation (see Section \ref{rotation_set}) \[ C_{\chi}(t+u,[z,s]) = C_{\chi}(u,\phi_t([z,s])) + C_{\chi}(t,[z,s]), \] for every $t,u\in \mathbb R$ and $[z,s]\in \Sigma_f(\Ss)$. Letting $s=0$, $u=t$ and $t=1$ in this relation we obtain \[ C_{\chi}(1+t,[z,0]) = C_{\chi}(t,[f(z),0]) + C_{\chi}(1,[z,0]).
\] Now setting $u=1$ and $s=0$ and applying the cocycle condition on the left-hand side of this expression, we obtain: \[ C_{\chi}(1+t,[z,0]) = C_{\chi}(1,[z,t]) + C_{\chi}(t,[z,0]). \] Replacing this last equality in the first relation, rearranging the terms, and setting $t=s$, it follows that for any $s\in [0,1)$ and $z\in \mathsf{S}$ it holds that \[ C_\chi(1,[z,s]) = C_\chi(s,[f(z),0]) + C_\chi(1,[z,0]) - C_\chi(s,[z,0]). \] If $\mu$ is the $f$--invariant Borel probability measure on $\mathsf{S}$ obtained by disintegration of $\nu$ with respect to the fibers, then replacing the last expression in the definition of $H_{f,\nu}(\chi)$ and using Fubini's theorem, we get \begin{align*} H_{f,\nu}(\chi) &= \int_{\Sigma_f(\Ss)} C_{\chi}(1,[z,s]) d\nu \\ &= \int_0^1 \left( \int_{\mathsf{S}} C_{\chi}(1,[z,s]) d\mu \right) ds \\ &= \int_0^1 \left( \int_{\mathsf{S}} C_{\chi}(1,[z,0]) d\mu \right) ds + \int_0^1 \left( \int_{\mathsf{S}} \left[ C_\chi(s,[f(z),0]) - C_\chi(s,[z,0])\right] d\mu \right) ds \\ &= \int_{\mathsf{S}} C_{\chi}(1,[z,0]) d\mu. \end{align*} Hence \[ H_{f,\nu}(\chi) = \int_{\mathsf{S}} C_{\chi}(1,[z,0]) d\mu. \] Since $\chi$ is any nontrivial character in $\mathrm{Char}(\Sigma_f(\Ss))$, by restricting $\chi$ to $\mathsf{S}$ one obtains a nontrivial character $\chi_q\in \mathrm{Char}(\mathsf{S})$. Applying Proposition \ref{associated_cocycle} to $\chi_q$, the following relation is obtained: \[ \chi_q(f(z)) = \exp(2\pi i C_{\chi_q}(1,[z,0])) \, \chi_q(z). \] This implies that $q(f(z)-z) - C_{\chi_q}(1,[z,0]) \in \mathbb{Z}$, and, since $q(f-\mathrm{id}) - C_{\chi_q}(1,[\cdot,0])$ is a continuous function on $\mathsf{S}$, we conclude that $C_{\chi_q}(1,[z,0]) = q(f(z) - z)$ for any $z\in \mathsf{S}$. Since $f(z) - z = \varphi(z)$ is the displacement function along the leaves, the value of the homomorphism $H_{f,\nu}$, which now depends on $\mu$, at any character $\chi\in \mathrm{Char}(\Sigma_f(\Ss))$ is given by \[ H_{f,\mu}(\chi) = q\int_\mathsf{S} \varphi d\mu.
\] Hence, for each $\mu\in \mathcal{P}_f(\mathsf{S})$, there exists a well defined continuous homomorphism \[ \rho_\mu:\mathrm{Char}(\Sigma_f(\Ss))\longrightarrow \mathbb{S}^1 \] given by \begin{align*} \rho_\mu(\chi) &:= \exp(2\pi iH_{f,\mu}(\chi))\\ &= \exp \left(2\pi i q\int_\mathsf{S} \varphi d\mu\right). \end{align*} This allows us to establish the following more general definition: \begin{definition} If $f:\mathsf{S}\longrightarrow \mathsf{S}$ is any homeomorphism which is isotopic to a rotation by an element not in the base leaf, the element $\rho_\mu(f) := \rho_\mu \in \mathsf{S}$ defined as above is the \emph{\textsf{rotation element}} associated to $f$ with respect to the measure $\mu$. \end{definition} \begin{remark} If $f$ is isotopic to an irrational rotation $R_\alpha$ with $\alpha\notin \mathcal{L}_0$, then the rotation interval $I_f$ of $f$ can be identified with $I_f\subset \mathcal{L}_0 + \alpha$. \end{remark} As indicated in the Introduction (see Section \ref{introduction}), the theory developed in this paper can be rewritten verbatim for any compact abelian one dimensional solenoidal group, since, by Pontryagin duality, any such group is the Pontryagin dual of a nontrivial additive subgroup $G\subset \mathbb{Q}$, where $\mathbb{Q}$ has the discrete topology. Denote by $\mathsf{S}_G$ such a group. According to the theory developed here, the following result is plausible: \begin{theorem} \label{solenoidal_Poincare-theorem} Suppose that $f:\mathsf{S}_G\longrightarrow \mathsf{S}_G$ is any homeomorphism isotopic to the identity, or isotopic to a rotation by an element not in the base leaf, with irrational rotation element $\rho(f)$. The homeomorphism $f$ is semiconjugated to the irrational rotation $R_{\rho(f)}$ if and only if $f$ has bounded mean variation. \end{theorem} The question remains: \begin{question} Under the same hypothesis, is $f$ conjugated to the rotation $R_{\rho(f)}$ when $f$ is minimal? \end{question} \end{document}
\begin{document} \title{On the diameter of the commuting graph of the full matrix ring over the real numbers} \subjclassname{05C50; 15A27} \keywords{Commuting graph, Diameter, Idempotent matrix} \begin{abstract} In a recent paper C. Miguel proved that the diameter of the commuting graph of the matrix ring $\mathrm{M}_n(\mathbb{R})$ is equal to $4$ if $n=3$ or $n>4$. The case $n=4$ remained open, since the diameter could be $4$ or $5$. In this work we close the problem by showing that in this case the diameter is also equal to $4$. \end{abstract} \section{Introduction} For a ring $R$, the \textit{commuting graph} of $R$, denoted by $\Gamma(R)$, is a simple undirected graph whose vertices are all non-central elements of $R$, and two distinct vertices $a$ and $b$ are adjacent if and only if $ab = ba$. The commuting graph was introduced in \cite{Ak1} and has been extensively studied in recent years by several authors \cite{Ak2,Ak3,Ak4,Ak5,Araujo,doli,mo, omi}. In a graph $G$, a path $\mathcal{P}$ is a sequence of distinct vertices $(v_1,\dots,v_k)$ such that every two consecutive vertices are adjacent. The number $k-1$ is called the length of $\mathcal{P}$. For two vertices $u$ and $v$ in a graph $G$, the distance between $u$ and $v$, denoted by $d(u,v)$, is the length of the shortest path between $u$ and $v$, if such a path exists. Otherwise, we define $d(u,v) =\infty$. The diameter of a graph $G$ is defined as $$\textrm{diam}(G) = \sup \{d(u,v) : \textrm{ \textit{u} and \textit{v} are distinct vertices of $G$\}}.$$ A graph $G$ is called connected if there exists a path between every two distinct vertices of $G$. Much research has been conducted regarding the diameter of commuting graphs of certain classes of rings \cite{Ak3,doli,Dolan1,Giu}.
In the case of matrix rings over an algebraically closed field $\mathbb{F}$, $\mathrm{M}_n(\mathbb{F})$, it was proved in \cite{Ak3} that for $n>2$ the commuting graph is connected and its diameter is always equal to four, while for $n = 2$ the commuting graph is always disconnected \cite{Ak5}. On the other hand, if the field $\mathbb{F}$ is not algebraically closed, the commuting graph $\Gamma(\mathrm{M}_n(\mathbb{F}))$ may be disconnected for arbitrarily large integers $n$ \cite{Ak4}. However, for any field $\mathbb{F}$ and $n\geq 3$, if $\Gamma(\mathrm{M}_n(\mathbb{F}))$ is connected, then the diameter is between four and six \cite{Ak3}. Moreover, the diameter is conjectured to be at most $5$, and if $n=p$ is prime it is proved that the diameter is, in fact, $4$. Quite recently, C. Miguel \cite{cel} has verified this conjecture in the case $\mathbb{F}=\mathbb{R}$ by proving the following result. \begin{teor} Let $n\geq 3$ be any integer. Then, $\textrm{diam}(\Gamma(\mathrm{M}_n(\mathbb{R})))=4$ for $n\neq 4$ and $4\leq \textrm{diam}(\Gamma(\mathrm{M}_4(\mathbb{R})))\leq 5$. \end{teor} Unfortunately, this result left open the question whether $\textrm{diam}(\Gamma(\mathrm{M}_4(\mathbb{R})))$ is $4$ or $5$. In this paper we solve this open problem. Namely, we will prove the following result. \begin{teor} For every $n\geq 3$, $\textrm{diam}(\Gamma(\mathrm{M}_n(\mathbb{R})))=4$. \end{teor} \section{On the diameter of $\Gamma(\mathrm{M}_n(\mathbb{R}))$} Before we proceed, let us introduce some notation. If $a,b\in\mathbb{R}$, we define the matrix $A_{a,b}$ as $$A_{a,b}:=\begin{pmatrix} a & b\\ -b & a\end{pmatrix}.$$ Now, given two matrices $X,Y\in M_2(\mathbb{R})$, we define $$X\oplus Y:=\begin{pmatrix} X & 0\\ 0 & Y\end{pmatrix}\in \mathrm{M}_4(\mathbb{R}).$$ Finally, two matrices $A,B\in \mathrm{M}_n(\mathbb{R})$ are similar ($A\sim B$) if there exists a regular matrix $P$ such that $P^{-1}AP=B$.
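To get a feel for these connectivity statements, the $n=2$ disconnectedness can be checked by brute force over a finite coefficient ring; the Python sketch below (an illustration outside the paper's real-coefficient setting) builds the commuting graph of $\mathrm{M}_2(\mathbb{F}_2)$ and verifies that a breadth-first search does not reach every vertex:

```python
from itertools import product

def mat_mul(A, B, p=2):
    """Multiply 2x2 matrices with entries taken modulo p."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

# All sixteen 2x2 matrices over F_2; the center of M_2(F_2) is {0, I}.
mats = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
center = [M for M in mats
          if all(mat_mul(M, X) == mat_mul(X, M) for X in mats)]
vertices = [M for M in mats if M not in center]

# Adjacency: distinct non-central matrices that commute.
adj = {V: [W for W in vertices if W != V and mat_mul(V, W) == mat_mul(W, V)]
       for V in vertices}

# Search from one vertex; it reaches only a proper subset of the vertices.
start = vertices[0]
seen, queue = {start}, [start]
while queue:
    for W in adj[queue.pop()]:
        if W not in seen:
            seen.add(W)
            queue.append(W)

print(len(seen) < len(vertices))  # True: Gamma(M_2(F_2)) is disconnected
```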
As we have pointed out in the introduction, in \cite[Theorem 1.1.]{cel} it is proved that the diameter of $\Gamma(\mathrm{M}_n(\mathbb{R}))$ is equal to 4 if $n\geq 3, n\neq 4$ and that $4\leq\textrm{diam}(\Gamma(\mathrm{M}_4(\mathbb{R})))\leq 5$. The proof given in that paper relies on the possible forms of the Jordan canonical form of a real matrix. In particular, it is proved that the distance between two matrices $A,B\in \mathrm{M}_4(\mathbb{R})$ is at most 4 unless we are in the situation where $A$ and $B$ have no real eigenvalues and only one of them is diagonalizable over $\mathbb{C}$. In other words, the case when $$A\sim \begin{pmatrix} A_{a,b} & 0 \\ 0 & A_{c,d} \end{pmatrix},\quad B\sim \begin{pmatrix} A_{s,t} & I_2 \\ 0 & A_{s,t} \end{pmatrix}.$$ The following result will provide us with the main tool to prove that the distance between $A$ and $B$ is at most $4$ also in this setting. It holds over any division ring $D$. \begin{prop} \label{prop} Let $A,B\in \mathrm{M}_n(D)$ be matrices such that $A^2=A$ and $B^2=0$. Then, there exists a non-scalar matrix commuting with both $A$ and $B$. \end{prop} \begin{proof} Since $A^2=A$, i.e. $A(I-A)=(I-A)A=0$, one of nullity $A$ or nullity $(I-A)$ is at least $n/2$. Since $I-A$ is also idempotent and a matrix commutes with $A$ if and only if it commutes with $I-A$, we can assume that nullity $A\geq n/2$. On the other hand, since $B^2=0$, it follows that nullity $B\geq n/2$. Now, if $\textrm{Ker} L_A\cap\textrm{Ker} L_B\neq\{0\}$ and $\textrm{Ker} R_A\cap\textrm{Ker} R_B\neq\{0\}$, we can apply \cite[Lemma 4]{Ak3} and the result follows. Hence we assume that $\textrm{Ker} L_A\cap\textrm{Ker} L_B=\{0\}$ (if it were $\textrm{Ker} R_A\cap\textrm{Ker} R_B=\{0\}$ we could consider $A^t$ and $B^t$). Note that, under these conditions, $n=2r$ and nullity $A$ and nullity $B$ are equal to $r$.
Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be bases for $\textrm{Ker} L_A$ and $\textrm{Ker} L_B$, respectively, and consider $\mathcal{B}=\mathcal{B}_1\cup\mathcal{B}_2$, a basis for $D^n$. Since $A$ is idempotent, it follows that $D^n=\textrm{Ker} L_A\oplus \textrm{Im} L_A$. We want to construct the matrix of $L_A$ in the basis $\mathcal{B}$. To do so, if $v\in\mathcal{B}_2$, we write $v=a+a'$ with $a\in\textrm{Ker}L_A$ and $a'\in\textrm{Im}L_A$, say $a'=Aa''$. Hence, using $A^2=A$, $Av=Aa+Aa'=0+A(Aa'')=Aa''=a'=-a+v$. Since it is clear that $Av=0$ for every $v\in\mathcal{B}_1$, we get that the matrix of $L_A$ in the basis $\mathcal{B}$ is of the form $$\begin{pmatrix} 0 & A'\\ 0 & I_r\end{pmatrix},$$ with $A'\in \mathrm{M}_r(D)$. Now we want to construct the matrix of $L_B$ in the basis $\mathcal{B}$. Clearly $Bv=0$ for every $v\in\mathcal{B}_2$. Now, let $w\in\mathcal{B}_1$. Then, $Bw=w_1+w_2$ with $w_1\in\textrm{Ker} L_A$ and $w_2\in\textrm{Ker}L_B$. Hence, $0=B^2w=Bw_1$ and $w_1\in\textrm{Ker} L_A\cap\textrm{Ker} L_B=\{0\}$. Thus, the matrix of $L_B$ in the basis $\mathcal{B}$ is of the form: $$\begin{pmatrix} 0 & 0\\ B' & 0\end{pmatrix},$$ with $B'\in \mathrm{M}_r(D)$. As a consequence of the previous work we can find a regular matrix $P$ such that: $$PAP^{-1}=\begin{pmatrix} 0 & A'\\ 0 & I_r\end{pmatrix},\quad PBP^{-1}=\begin{pmatrix} 0 & 0\\ B' & 0\end{pmatrix}.$$ Now, if $A'B'\neq B'A'$ we can consider the matrix $$P^{-1}(A'B'\oplus B'A')P=P^{-1}\begin{pmatrix} A'B' & 0\\ 0 & B'A'\end{pmatrix}P,$$ which is clearly non-scalar and commutes with $A$ and $B$. On the other hand, if $A'$ and $B'$ commute, we can find a non-scalar matrix $S\in \mathrm{M}_r(D)$ commuting with both $A'$ and $B'$ (if one of them is non-scalar we may take $S$ equal to it; otherwise any non-scalar $S$ works). Therefore $P^{-1}(S\oplus S)P$ commutes with both $A$ and $B$ and the proof is complete. \end{proof} In addition to this result, we will also need the following technical lemmata.
\begin{lem} \label{l1} If $A\sim \begin{pmatrix} A_{a,b} & 0 \\ 0 & A_{c,d} \end{pmatrix}$, then there exists an idempotent non-scalar matrix $M$ such that $AM=MA$. \end{lem} \begin{proof} $A=P^{-1} \begin{pmatrix} A_{a,b} & 0 \\ 0 & A_{c,d} \end{pmatrix}P$ for some regular $P\in \mathrm{M}_4(\mathbb{R})$. Hence, it is enough to consider $M=P^{-1} \begin{pmatrix} 0 & 0 \\ 0 & I_2 \end{pmatrix}P$. \end{proof} \begin{lem} \label{l2} If $B\sim \begin{pmatrix} A_{s,t} & I_2 \\ 0 & A_{s,t} \end{pmatrix}$, then there exists a non-scalar matrix $N$ such that $N^2=0$ and $BN=NB$. \end{lem} \begin{proof} $B=P^{-1}\begin{pmatrix} A_{s,t} & I_2 \\ 0 & A_{s,t} \end{pmatrix}P$ for some regular $P\in \mathrm{M}_4(\mathbb{R})$. Hence, it is enough to consider $N=P^{-1}\begin{pmatrix} 0 & I_2 \\ 0 & 0 \end{pmatrix} P$. \end{proof} We are now in a position to prove the main result of the paper. \begin{teor} The diameter of $\Gamma(\mathrm{M}_4(\mathbb{R}))$ is four. \end{teor} \begin{proof} In \cite{cel} it was proved that $d(A,B)\leq 4$ for every $A,B\in \mathrm{M}_4(\mathbb{R})$ unless $A\sim \begin{pmatrix} A_{a,b} & 0 \\ 0 & A_{c,d} \end{pmatrix}$ and $B\sim \begin{pmatrix} A_{s,t} & I_2 \\ 0 & A_{s,t} \end{pmatrix}$. Hence, we only focus on this case. By Lemma \ref{l1} there exists an idempotent non-scalar matrix $M$ such that $AM=MA$. Also, by Lemma \ref{l2}, there exists a non-scalar matrix $N$ such that $N^2=0$ and $NB=BN$. Finally, Proposition \ref{prop} implies that there exists a non-scalar matrix $X$ that commutes with both $M$ and $N$. Thus, we have found a path $(A,M,X,N,B)$ of length $4$ connecting $A$ and $B$, and the result follows. \end{proof} \end{document}
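The path constructed in this proof can be checked concretely. The Python/NumPy sketch below uses illustrative parameter values with $P=I$, and takes $X=S\oplus S$ with $S=\operatorname{diag}(1,2)$, one explicit non-scalar matrix commuting with this particular $M$ and $N$ (as guaranteed by the Proposition):

```python
import numpy as np

def A_block(a, b):
    """The 2x2 block A_{a,b} from the paper."""
    return np.array([[a, b], [-b, a]], dtype=float)

def direct_sum(X, Y):
    """X (+) Y as a 4x4 block-diagonal matrix."""
    Z = np.zeros((4, 4))
    Z[:2, :2], Z[2:, 2:] = X, Y
    return Z

I2, Z2 = np.eye(2), np.zeros((2, 2))

# The remaining case of the theorem, with illustrative parameters and P = I:
A = direct_sum(A_block(1, 2), A_block(3, 4))
B = np.block([[A_block(5, 6), I2], [Z2, A_block(5, 6)]])

M = direct_sum(Z2, I2)                # idempotent, commutes with A (Lemma 1)
N = np.block([[Z2, I2], [Z2, Z2]])    # N^2 = 0, commutes with B (Lemma 2)
S = np.diag([1.0, 2.0])
X = direct_sum(S, S)                  # non-scalar, commutes with M and N

assert np.allclose(M @ M, M) and np.allclose(N @ N, 0 * N)
for U, V in [(A, M), (M, X), (X, N), (N, B)]:
    assert np.allclose(U @ V, V @ U)
print("path (A, M, X, N, B) of length 4 verified")
```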
Beijing Friendship Hospital Osteoporosis Self-Assessment Tool for Elderly Male (BFH-OSTM) vs Fracture Risk Assessment Tool (FRAX) for identifying painful new osteoporotic vertebral fractures in older Chinese men: a cross-sectional study Ning An, Ji Sheng Lin & Qi Fei BMC Musculoskeletal Disorders volume 22, Article number: 596 (2021) To compare the validity of four tools for identifying painful new osteoporotic vertebral compression fractures (PNOVCFs) in older Chinese men: bone mineral density (BMD), the Asian osteoporosis self-assessment tool (OSTA), the World Health Organization fracture risk assessment tool (FRAX, without BMD) and the Beijing Friendship Hospital Osteoporosis Self-Assessment Tool for Elderly Male (BFH-OSTM). A cross-sectional study was conducted from 2013 to 2019. A total of 846 men aged ≥50 years were included and divided into two groups: a Fracture Group (patients with PNOVCFs who underwent percutaneous vertebroplasty) and a Non-Fracture Group (community-dwelling subjects attending health examinations). All subjects underwent a dual-energy X-ray BMD test and completed a structured questionnaire. BMD, OSTA, FRAX and BFH-OSTM scores were assessed, and receiver operating characteristic (ROC) curves were generated to compare the validity of the four tools for identifying PNOVCFs. Optimal cutoff points, sensitivity, specificity, and areas under the ROC curves (AUCs) were determined. There were significant differences in BMD T-scores (femoral neck, total hip and L1-L4) and in OSTA, FRAX and BFH-OSTM scores between the Fracture Group and the Non-Fracture Group. Compared to BMD and OSTA, BFH-OSTM and FRAX had better predictive value: the AUC, sensitivity and specificity were 0.841, 81.29% and 70.67% for BFH-OSTM and 0.796, 74.85% and 78.52% for FRAX, respectively. Compared with FRAX, BFH-OSTM had a better AUC value.
Both BFH-OSTM and FRAX can be used to identify PNOVCFs; however, the BFH-OSTM model may be a simpler and more effective tool for identifying the risk of PNOVCFs in elderly Chinese men. Osteoporotic vertebral compression fracture (OVCF) is the most common complication of primary osteoporosis; it occurs most often in postmenopausal women but also affects many elderly men. One in five men worldwide is threatened by osteoporotic fractures after the age of 50 [1]. Fragility fractures can cause substantial pain and severe disability, often leading to a reduced quality of life, and vertebral fractures are associated with decreased life expectancy [2]. Osteoporotic vertebral fractures accounted for 0.83% of the global burden of non-communicable diseases [3]. More than 40% of patients fail to achieve significant pain relief within 12 months [4, 5]. More worryingly, the long-term mortality rate of patients with a history of OVCF is significantly higher than that of the general population, and the in-hospital mortality rate of OVCF patients ranges from 0.3% to 1.7%. This invisibly increases the social and economic burden. Identifying painful new osteoporotic vertebral compression fractures (PNOVCFs) early is a major challenge all over the world, especially in primary hospitals [6]. The clinical onset of PNOVCF in older men is insidious: patients have only a history of mild low-energy injury, or even no trauma history at all; the degree of pain varies greatly and in some cases develops into chronic pain; and physical examination often yields no clear localizing signs (some patients even report a pain site that is inconsistent with the actual fracture level) [7]. These characteristics make PNOVCF easy to misdiagnose or miss, especially in primary hospitals with limited professional experience and equipment.
New vertebral fractures cause unbearable pain, and the complications of the consequent bed rest are extremely distressing for patients. Early screening and diagnosis of high-risk men may play an important role in reducing the incidence of severe events and mortality. Therefore, it is necessary to develop an appropriate, simple screening tool for PNOVCF based on clinical risk factors, especially to aid physicians with limited professional experience and equipment [8]. OSTA, an Asian osteoporosis self-assessment tool developed by Koh et al., is based on age and weight and assesses the risk of osteoporosis with high sensitivity and acceptable specificity [9]. Some clinical results suggest that OSTA (cutoff < −1) has a sensitivity of 32.3%, a specificity of 92.3%, and an AUC of 0.618 for identifying subjects with osteoporotic vertebral compression fracture in a population aged 40 years and above residing in Malaysia [10]. Men accounted for nearly 50% of that sample, which partly reflects the ability of OSTA to identify OVCF in older men. Our previous results also showed an AUC of 0.661 with a cutoff of −1.2, a sensitivity of 53.15% and a specificity of 76.88% in Chinese men aged 50 years and above consecutively recruited from the Osteoporosis Clinic at Beijing Friendship Hospital [11]. It may also be helpful for identifying postmenopausal women with OVCF. However, OSTA still lacks sufficient confirmation for identifying PNOVCF. In 2008, the World Health Organization introduced the fracture risk assessment tool FRAX to assess a patient's absolute risk of osteoporotic fracture [12]. To predict the likelihood of major osteoporotic fractures within 10 years, FRAX considers the interaction of risk factors such as age, sex, and personal and family history. In addition, because the incidence of fractures varies widely from country to country, FRAX calibrates its risk factors for different countries.
Our preliminary cross-sectional study confirmed that FRAX can be used as a predictive tool to help detect PNOVCF. The AUC of the FRAX tool was 0.738 with a cutoff of 2.9%, a sensitivity of 81.98% and a specificity of 62.0% [11]. However, in clinical practice FRAX requires seven risk factors, which makes it difficult to promote in primary or community hospitals. A history of fragility fracture is an important risk factor not only for osteoporosis but also for OVCF [13, 14]. Accordingly, our previous study developed a clinical screening tool (BFH-OSTM) based on two clinical risk factors, including the history of fragility fracture. Previous studies have confirmed that it identifies male osteoporosis well: the BFH-OSTM index (cutoff = 70) had a sensitivity of 85% and a specificity of 53% for identifying osteoporosis according to the WHO criteria, with an area under the ROC curve of 0.763. However, whether it has value for detecting and identifying PNOVCF is unknown [15]. Therefore, this cross-sectional study evaluated and compared the validity of BMD, OSTA, FRAX (without BMD) and our BFH-OSTM for identifying PNOVCF in an elderly Chinese male population.

This cross-sectional study was approved by the Ethics Committee of Beijing Friendship Hospital, Capital Medical University, and all subjects provided signed informed consent. All methods were implemented in accordance with the relevant guidelines and regulations. The main flow chart of the study is shown in Fig. 1. BMD bone mineral density, OSTA Osteoporosis self-Assessment Tool for Asians, FRAX fracture risk assessment tool, BFH-OSTM Beijing Friendship Hospital Osteoporosis Self-Assessment Tool for Elderly Male

The subjects in this study were Chinese men aged ≥50 years who attended the Orthopedic Clinic of Beijing Friendship Hospital from June 2013 to February 2019. All subjects underwent a dual-energy X-ray BMD test and completed a structured questionnaire.
These men comprised patients with PNOVCFs confirmed by clinical symptoms and verified by X-ray, MRI and other examinations within the past 6 months (Fracture group), and community-dwelling men attending for health examination and bone mineral density screening (Non-fracture group). All subjects were asked by a trained interviewer to complete a questionnaire providing information on demographic variables and clinical risk factors for osteoporosis using a structured table. The potential risk factors in the questionnaire were identified from previous research. These factors included age, height, weight, body mass index (BMI), previous fracture, current smoking, consumption of three or more alcoholic drinks per day, glucocorticoid use, rheumatoid arthritis and a parental history of hip fracture [16]. Height was measured with a stadiometer (Mahr GmbH, Gottingen, Germany). Weight was measured on an electronic scale (Tanita, Tokyo, Japan), with subjects wearing lightweight indoor clothes and no shoes. Men in both the Fracture group and the Non-fracture group met the following inclusion criteria: aged ≥50 years; Han Chinese ethnicity; resident locally for ≥20 years; and willing to participate in this study and sign the informed consent. Excluded was anyone with a history or evidence of metabolic bone disease (such as type 1 diabetes, parathyroid dysfunction, Paget's disease or osteomalacia); a history of organ transplant; bone metastasis of cancer; severe renal impairment; a condition of prolonged immobility (such as spinal cord injury, stroke, muscular dystrophy or ankylosing spondylitis); or previous use of anti-resorptive drugs (e.g., bisphosphonates, estrogen, selective estrogen receptor modulators, and calcitonin) or anabolic agents (e.g., fluoride or parathyroid hormone) [17]. The main indications for PVP surgery were as follows: (1) acute OVCF: magnetic resonance imaging (MRI) on T1-weighted images showed a low signal.
MRI on T2-weighted images and short tau inversion recovery sequences showed a high signal; (2) VAS (visual analogue scale) ≥6; (3) the patient refused conservative treatment [18].

Fracture group and identification of PNOVCF

We defined four necessary clinical criteria for PNOVCF, as follows [11]: (1) Men ≥50 years old with no obvious history of trauma, or a fracture history of low-energy trauma; a low-energy traumatic fracture is defined as a fracture caused by a fall from a standing or lower position. (2) Clinical symptoms such as low back pain within 6 months before the bone mineral density scans. (3) Clinical signs of osteoporotic vertebral fracture on X-ray and MRI: on the X-ray film, a reduction of >20% in the anterior, middle or posterior height of the vertebral body in a lateral-view image of the thoracic/lumbar spine, or endplate deformities, lack of parallelism, and general changes in appearance relative to adjacent vertebrae; on MRI, newly found bone marrow edema on sagittal T1-weighted images and fat-suppressed T2-weighted images. (4) No history or evidence of metabolic bone disease or cancer [19].

BFH-OSTM

In this model [15], only weight and history of previous fracture were retained in the final model, which was named the Beijing Friendship Hospital Osteoporosis Self-Assessment Tool for Elderly Male (BFH-OSTM) [20]. The model is calculated using the following formula:

$$ \text{BFH-OSTM score} = \text{body weight (kg)} - \text{history of previous fragility fracture (no} = 0, \text{yes} = 1) \times 7 $$

The two key factors of our model were collected as follows: medical records were reviewed to record body weight and previous fragility fracture history at admission, and questionnaire information was collected in both the fracture group and the non-fracture group. We defined a fragility fracture in men ≥50 years old as a fracture with no obvious history of trauma or with a history of low-energy trauma; a low-energy traumatic fracture is defined as a fracture caused by a fall from a standing or lower position. Fragility fractures occur most commonly in the spine (vertebrae), hip (proximal femur) and wrist (distal radius). They may also occur in the arm (humerus), pelvis, ribs and other bones. Fragility fractures are defined as fractures associated with low BMD and include clinical spine, forearm, hip and shoulder fractures. For example, a man whose body weight was 65 kg with a previous fragility fracture would have an index of 65 − 1 × 7 = 58.

BMD measurements

All enrolled men underwent BMD measurement of the hip and spine by DXA at our hospital. We used the Wi densitometer (Hologic Inc., Bedford, MA, USA) to measure the BMD of the left lumbar vertebrae (L1~L4), the left femoral neck and the total hip. To standardize the measurements, quality control was carried out every day before the initial measurement. These men had short-term repeatability values of less than 1% at the lumbar spine, femoral neck and total hip [21]. Throughout the study, all DXA scans were performed by the same experienced and qualified technician. The BMD T score was calculated automatically by the system. The T score is referenced to the average bone mineral density of young Chinese men: L1~L4, 1.017 ± 0.117 g/cm2; femoral neck, 0.909 ± 0.116 g/cm2; total hip, 0.993 ± 0.121 g/cm2.

OSTA score

The osteoporosis self-assessment tool for Asians (OSTA) was first developed by Koh et al. in 2001 [9].
In the final formula, only age and body weight are selected as influencing factors, as follows [10]:

$$ \text{OSTA score} = (\text{body weight (kg)} - \text{age (years)}) \times 0.2 $$

Only the integer part is taken as the result of the calculation. For example, a 70-year-old man whose body weight was 75 kg would have an index of (75 − 70) × 0.2 = 1.

FRAX score

FRAX is a computer algorithm based on clinical risk factors (http://www.sheffield.ac.uk/FRAX) and is the most widely validated and widely used tool for fracture risk assessment in men and women [22]. It combines the risk of fracture with the risk of death and constructs four models to calculate the probability of fracture. The 10-year fracture probability can be obtained from clinical risk factors alone, or predicted more accurately by also including the bone mineral density of the femoral neck. Because the probability of fracture also varies significantly across regions of the world, the FRAX model is calibrated to the epidemiological data on fracture and death in each region. We chose the FRAX version for the Chinese mainland. Because this study focused on the ability of FRAX to identify and predict painful osteoporotic vertebral compression fractures, we selected the model for the 10-year likelihood of major osteoporotic fracture without bone mineral density measurements [23]; that is, we used the major osteoporotic fracture output of FRAX (without BMD).

Descriptive statistics for demographic and baseline characteristics are expressed as mean ± standard deviation for continuous variables or as percentages for categorical variables. Normally distributed data are described as mean ± standard deviation; otherwise, data are described as median and interquartile range. A Chi-square test was used to compare categorical data.
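The two questionnaire-based scores described above reduce to one line of arithmetic each. A minimal sketch (function names are ours, not the paper's; truncation toward zero for OSTA is assumed from the worked example):

```python
def bfh_ostm_score(weight_kg: float, previous_fragility_fracture: bool) -> float:
    """BFH-OSTM index: body weight (kg) minus 7 if there is a
    history of previous fragility fracture (no = 0, yes = 1)."""
    return weight_kg - (1 if previous_fragility_fracture else 0) * 7


def osta_score(weight_kg: float, age_years: float) -> int:
    """OSTA index: (body weight - age) * 0.2, keeping only the
    integer part (truncation toward zero, per the paper's example)."""
    return int((weight_kg - age_years) * 0.2)


# Worked examples from the text:
print(bfh_ostm_score(65, True))  # 65 - 1*7 = 58
print(osta_score(75, 70))        # (75 - 70) * 0.2 = 1.0 -> 1
```

Lower scores indicate higher risk for both indices, which is why the study's optimal cutoffs (69 for BFH-OSTM, −1.2 for OSTA) flag subjects at or below the threshold.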
The differences in BMD, OSTA, FRAX and BFH-OSTM scores between the Fracture group and the Non-fracture group were tested by t-test for two independent samples (for normally distributed data with homogeneous variance) or by a non-parametric test (for non-normally distributed data). The effectiveness of the four tools for identifying OVCF was evaluated by receiver-operating characteristic (ROC) curve analysis, which plots sensitivity against (1 − specificity). Predictive value was classified from the area under the ROC curve (AUC) as follows [24]: not predictive, AUC ≤ 0.5; less predictive, 0.5 < AUC < 0.7; moderately predictive, 0.7 < AUC < 0.9; highly predictive, 0.9 < AUC < 1; and perfectly predictive, AUC = 1. ROC curves were built and AUCs with their 95% confidence intervals (CIs) were estimated using SPSS version 26.0 and MedCalc version 11.5.0.0. A p value <0.05 was considered statistically significant.

A sample of 897 men aged ≥50 years was initially enrolled in the study. According to the inclusion and exclusion criteria, 51 subjects were excluded, so 846 subjects were analyzed (Table 1). These included 171 men who had suffered PNOVCF within 6 months before the BMD measurement (Fracture group) and 675 healthy community-based men (Non-fracture group).

Table 1 Summary of descriptive characteristics of Fracture Group and Non-Fracture Group

Between the Fracture group and the Non-fracture group, there were considerable differences in weight, height, previous fracture, family history and BMD at the femoral neck, total hip and L1~L4 [25, 26]. In particular, body mass index was higher in the Non-fracture group than in the Fracture group. Men in the Fracture group had experienced more fractures and had lower average BMD than men in the control group. The BMI of the Fracture group ranged from 14.17 kg/m2 to 31.6 kg/m2, and that of the Non-fracture group from 17.72 kg/m2 to 37.18 kg/m2.
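The AUC interpretation bands cited from [24] can be written as a small helper (an illustrative sketch; the band labels follow the text, and the handling of exact boundary values is our assumption):

```python
def classify_auc(auc: float) -> str:
    """Map an AUC value to the predictive-value bands used in the text."""
    if not 0.0 <= auc <= 1.0:
        raise ValueError("AUC must lie in [0, 1]")
    if auc == 1.0:
        return "perfectly predictive"
    if auc >= 0.9:
        return "highly predictive"
    if auc >= 0.7:
        return "moderately predictive"
    if auc > 0.5:
        return "less predictive"
    return "not predictive"


print(classify_auc(0.841))  # BFH-OSTM -> moderately predictive
print(classify_auc(0.650))  # lumbar-spine BMD -> less predictive
```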
Previous fractures accounted for 20.21% of the total sample (n = 171), and 5.32% of the subjects had a family history of osteoporosis (n = 45). Current smokers accounted for 47.04% of the study population (n = 398), and 42.79% drank more than 30 g of alcohol per day (n = 362) [22]. There were significant differences in weight, height, BMI, previous fracture, BMD, family history, current smoking and alcohol intake over 30 g/d between the Fracture group and the Non-fracture group (p < 0.05).

BMD T-scores, OSTA and FRAX indices, and BFH-OSTM

There were significant differences in BMD T-score, FRAX, OSTA and BFH-OSTM scores between the Fracture group and the Non-fracture group (Table 2). The BMD T-scores at the total hip, femoral neck and L1~L4, the BFH-OSTM score and the OSTA score were significantly lower in the Fracture group than in the Non-fracture group, while the FRAX index was significantly higher.

Table 2 BMD T-score, OSTA, FRAX and BFHOSTM scores of Fracture Group and Non-Fracture Group

BMD T-scores

Of the men in the Fracture group, only 50.9% were found to have osteoporosis (WHO criteria), that is, a BMD T-score below −2.5 at the femoral neck, total hip or lumbar spine (Fig. 2). In the Non-fracture group, the corresponding percentages were only 2.5%, 1.6%, and 8.9%, respectively. The AUCs of BMD for estimating the risk of OVCF at the femoral neck, total hip and lumbar spine were 0.779, 0.776 and 0.650, with optimal cutoffs of −1.4, −1.4 and −0.7.

Proportions of BMD T-scores at different sites in the Fracture (1) and Non-Fracture (2) groups

Evaluation and comparison of BMD T-score, OSTA, FRAX and BFH-OSTM

First, our BFH-OSTM model performed best, with an AUC of 0.841 (95%CI: 0.815–0.865, Z = 21.942, p < 0.001), a cutoff of 69, and a sensitivity and specificity of 81.29% and 70.67%.
The AUC of the FRAX tool (without BMD) was 0.796 (95%CI: 0.768–0.823, Z = 14.384, p < 0.001), with a cutoff value of 2.9% and a sensitivity and specificity of 74.85% and 78.52%, respectively. Compared with FRAX, the BFH-OSTM model may be a more effective tool for determining the risk of PNOVCF in these elderly Chinese men: the Z-value for the comparison between BFH-OSTM and FRAX was 2.068 with a p-value of 0.0387 (<0.05), so the difference is statistically significant (Fig. 5). The AUCs of BMD at the femoral neck, total hip and lumbar spine for the diagnosis of OVCF were 0.779 (95%CI: 0.750–0.807, Z = 13.842, p < 0.001), 0.776 (95%CI: 0.746–0.803, Z = 12.611, p < 0.001) and 0.650 (95%CI: 0.616–0.682, Z = 6.577, p < 0.001), respectively, with cutoff values of −1.4, −1.4 and −0.7. For the OSTA model, the AUC was 0.752 (95%CI: 0.721–0.781, Z = 11.085, p < 0.001); at a cutoff value of −1.2, the sensitivity and specificity were 50.88% and 89.04% (Figs. 3 and 4).

ROC curve of the BMD measurement at different sites for identifying PNOVCF with optimal cutoff value. BMD bone mineral density, OSTA Osteoporosis self-Assessment Tool for Asians, FRAX fracture risk assessment tool, BFH-OSTM Beijing Friendship Hospital Osteoporosis Self-Assessment Tool for Elderly Male

AUC, sensitivity and specificity values of the FRAX, BMD T-score, OSTA and BFH-OSTM for identifying PNOVCF. Comparison of different AUCs (BMD T-score, OSTA, FRAX and BFH-OSTM for identifying OVCF). *Optimal FRAX cutoff; +LR: positive likelihood ratio; −LR: negative likelihood ratio.
BMD bone mineral density, OSTA Osteoporosis self-Assessment Tool for Asians, FRAX fracture risk assessment tool, BFH-OSTM Beijing Friendship Hospital Osteoporosis Self-Assessment Tool for Elderly Male, AUC area under the receiver operating characteristic curve, OVCF osteoporotic vertebral compression fracture

This cross-sectional study compared the validity of BMD, OSTA, FRAX (without BMD) and BFH-OSTM for identifying PNOVCF in Chinese men aged 50 and over. According to our diagnostic criteria for PNOVCF, these tools are suitable for comparison between healthy men and patients with PNOVCF. Our study showed that for BMD, the AUCs for assessing the risk of PNOVCF at the femoral neck, total hip and lumbar spine were 0.779, 0.776 and 0.650, respectively, with corresponding optimal cutoff values of −1.4, −1.4 and −0.7. This indicates that BMD measurements at the femoral neck, total hip and lumbar spine are only moderately predictive. Among the three sites, the lumbar spine had the highest sensitivity (79.53%), while as a screening measure the total hip had a specificity of 77.33%. Overall, the sensitivity of BMD for assessing fracture risk was moderate: 71.93% at the femoral neck, 64.91% at the total hip and 79.53% at the lumbar spine; the specificity at the lumbar spine was only 44.3%, which is less acceptable. Our previous research showed that the AUCs for estimating fracture risk at the femoral neck, total hip and lumbar spine were 0.706, 0.711 and 0.706, respectively, with optimal cutoffs of −2.5, −1.4 and −1.6, sensitivities of 42.34%, 67.57% and 52.25%, and specificities of 89.87%, 65.45% and 77.14%. Because of the high cost of central dual-energy X-ray absorptiometry, BMD is not suitable as a preliminary screening tool in primary hospitals.
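For reference, the per-cutoff sensitivity and specificity figures discussed throughout come from simple counts, and an "optimal" cutoff is typically the one maximizing Youden's J = sensitivity + specificity − 1. A self-contained sketch on made-up scores (illustrative data only, not the study's):

```python
def sens_spec(scores, labels, cutoff, positive_low=True):
    """Sensitivity and specificity when scores at or below `cutoff`
    are called positive (low scores = at risk, as for BFH-OSTM)."""
    calls = [(s <= cutoff) if positive_low else (s >= cutoff) for s in scores]
    tp = sum(c and y for c, y in zip(calls, labels))
    fn = sum((not c) and y for c, y in zip(calls, labels))
    tn = sum((not c) and (not y) for c, y in zip(calls, labels))
    fp = sum(c and (not y) for c, y in zip(calls, labels))
    return tp / (tp + fn), tn / (tn + fp)


def best_cutoff(scores, labels, positive_low=True):
    """Cutoff maximizing Youden's J over the observed score values."""
    def j(c):
        se, sp = sens_spec(scores, labels, c, positive_low)
        return se + sp - 1
    return max(set(scores), key=j)


# Made-up BFH-OSTM-like scores: label 1 = fracture, 0 = no fracture.
scores = [55, 58, 62, 68, 71, 74, 80, 85]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
print(best_cutoff(scores, labels))  # 68 cleanly separates the two groups
```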
In the Fracture group, 17.5%, 23.4% and 38.0% of the patients had normal BMD at the femoral neck, total hip and lumbar spine, respectively, so the value of bone mineral density as a predictor of PNOVCF is limited. We therefore urgently need a screening tool that is more accurate and simpler than bone mineral density measurement to identify PNOVCF; moreover, because primary hospitals lack such equipment, BMD should not be relied on as a screening tool there. As shown in Table 1, the average weight, height and BMI of the Fracture group were lower than those of the Non-fracture group. Patients in the Fracture group had experienced more fragility fractures than those in the Non-fracture group, so we consider low weight and a previous fragility fracture history to be risk factors for PNOVCF as well. This is consistent with the traditional clinical view [27]. The lower height relative to generally healthy people may be explained by the physiological characteristics of the spine: the vertebral bodies of osteoporosis patients are more likely to be compressed, and the resulting morphological changes of the vertebral bodies and intervertebral spaces shorten the spine [28]. The calculation of OSTA is very simple, based on only two influencing factors, age and body weight; it is simpler than BMD measurement and is suitable for osteoporosis risk assessment in postmenopausal Asian women. Recently published data show that the OSTA index can also be used to predict the risk of osteoporosis in elderly Chinese men, but its ability to predict new osteoporotic vertebral compression fractures in this population has not been confirmed [29]. Our previous and current studies show a significant difference in the distribution of OSTA scores between the Fracture group and the Non-fracture group.
Its ability to recognize OVCF (AUC = 0.752) is slightly lower than that of femoral neck and total hip BMD, but better than that of lumbar spine BMD. However, a disadvantage that cannot be ignored is its low sensitivity (50.88%). For screening tools, we focus more on high sensitivity than on high specificity, so that fewer patients undergo unnecessary treatment or invasive tests [30]. The OSTA index may therefore not be applicable to the prediction of PNOVCF in elderly Chinese men, as this runs counter to the purpose of screening. The FRAX algorithm was developed to assess the 10-year risk of osteoporotic fractures of the hip, spine, distal forearm and shoulder and has been recommended by the World Health Organization. Because it does not require BMD measurement and collects more comprehensive patient data, its ability to distinguish PNOVCF is indeed stronger than BMD measurement and the OSTA score [23]. The AUC of FRAX for the diagnosis of OVCF risk was 0.796, with a sensitivity and specificity of 74.85% and 78.52%, respectively, at the optimal cutoff. Among the tools tested in the present study, FRAX had a higher discriminating ability for identifying PNOVCF, followed by OSTA and BMD. In clinical practice, however, FRAX requires many risk factors and corresponding hardware and software, so it has certain limitations [26, 31]. Our BFH-OSTM is a calculation model based on multiple regression analysis of data from multiple centers, in which two risk factors, body weight and previous fragility fracture history, were selected. Compared with FRAX, the BFH-OSTM model may be an effective tool for determining the risk of PNOVCF in this elderly Chinese male population; its performance is better than that of FRAX (p < 0.05). The formula is (body weight [kg] − history of previous fracture [no = 0, yes = 1] × 7). BFH-OSTM can not only predict osteoporosis but can also be used for the early detection of PNOVCF, although the cutoff value may differ.
The optimal cutoff value for identifying PNOVCF is 69, with a sensitivity and specificity of 81.29% and 70.67%, respectively, and an AUC of 0.841. The ability to identify PNOVCF is thus significantly improved, and the sensitivity is also higher (Fig. 5). The predictive value of our BFH-OSTM is clearly better than that of FRAX. We think the cutoff value may differ slightly across populations and regions, which needs further confirmation.

Comparison of different AUCs (FRAX and BFH-OSTM for identifying OVCF)

Among the tools tested in this study, BFH-OSTM had the highest discriminative validity for identifying PNOVCF in this elderly male population, followed by FRAX, with BMD at the femoral neck and total hip performing better than OSTA. Compared with OSTA and BMD T-scores, the FRAX score (without BMD) includes more related risk or protective factors and adapts to local conditions, so it has higher identification value, but this also limits its promotion and application. Despite the availability of computer software to simplify the calculation, the complexity of data collection makes clinical evaluation difficult, so FRAX is not suitable for large-scale screening or community application. Our BFH-OSTM model is a multi-factor analysis model that captures the two most critical influencing factors, body weight and previous fragility fracture history. It is simple, yet achieves the best predictive value for PNOVCF while ensuring the sensitivity and specificity of screening, so it is easy to popularize and apply [32]. Our research has several noteworthy advantages. First, it is a cross-sectional study, so the information obtained is not retrospective.
Second, owing to the rigor of data collection, the age and weight of subjects were recorded at the time of bone mineral density measurement, and all diagnoses and results were made by experienced physicians. Third, we imposed strict inclusion and exclusion criteria to exclude the effects of other factors. Finally, our research has important clinical significance: it can help inexperienced doctors in primary hospitals or community health service centers to detect PNOVCF as early as possible. More importantly, no learning curve is needed; compared with FRAX's traditionally recognized ability to assess fracture risk, BFH-OSTM is a simpler, more direct and effective model for clinicians. However, the current research still has some limitations. First, we only recruited subjects from one hospital, a single-center design, so our sample cannot fully represent the entire population of China. Second, more centers should participate to enlarge the dataset and verify the accuracy of the model. Our study found that neither BMD nor OSTA is sufficient to identify the risk of PNOVCF in clinical practice. Compared to FRAX, the BFH-OSTM model may be a simpler and more effective tool for determining the risk of PNOVCF in this elderly Chinese male population. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Kanis JA, Johnell O, Oden A, et al. Long-term risk of osteoporotic fracture in Malmö. Osteoporos Int. 2000;11:669–74. Johnston CB, Dagar M. Osteoporosis in older adults. Med Clin North Am. 2020;104:873–84. Johnell O, Kanis JA. An estimate of the worldwide prevalence and disability associated with osteoporotic fractures. Osteoporos Int. 2006;17:1726–33. Goldstein CL, Chutkan NB, Choma TJ, Orr RD. Management of the elderly with vertebral compression fractures. Neurosurgery. 2015;77(Suppl 4):S33–45.
Jin YZ, Lee JH, Xu B, Cho M. Effect of medications on prevention of secondary osteoporotic vertebral compression fracture, non-vertebral fracture, and discontinuation due to adverse events: a meta-analysis of randomized controlled trials. BMC Musculoskelet Disord. 2019;20:399. Rud B, Hilden J, Hyldstrup L, Hróbjartsson A. The Osteoporosis Self-Assessment Tool versus alternative tests for selecting postmenopausal women for bone mineral density assessment: a comparative systematic review of accuracy. Osteoporos Int. 2009;20:599–607. Cicala D, Briganti F, Casale L, et al. Atraumatic vertebral compression fractures: differential diagnosis between benign osteoporotic and malignant fractures by MRI. Musculoskelet Surg. 2013;97(Suppl 2):S169–79. Lane JM, Russell L, Khan SN. Osteoporosis. Clin Orthop Relat Res. 2000;(372):139-50. Koh LK, Sedrine WB, Torralba TP, et al. A simple tool to identify asian women at increased risk of osteoporosis. Osteoporos Int. 2001;12:699–705. Subramaniam S, Chan CY, Soelaiman IN, et al. The performance of osteoporosis self-assessment tool for Asians (OSTA) in identifying the risk of osteoporosis among Malaysian population aged 40 years and above. Arch Osteoporos. 2019;14:117. Fei Q, Lin J, Yang Y, et al. Validation of three tools for identifying painful new osteoporotic vertebral fractures in older Chinese men: bone mineral density, Osteoporosis Self-Assessment Tool for Asians, and fracture risk assessment tool. Clin Interv Aging. 2016;11:461–9. Kanis JA, Johnell O, Oden A, Johansson H, McCloskey E. FRAX and the assessment of fracture probability in men and women from the UK. Osteoporos Int. 2008;19:385–97. Johnell O, Kanis J. Epidemiology of osteoporotic fractures. Osteoporos Int. 2005;16(Suppl 2):S3–7. Lane NE. Epidemiology, etiology, and diagnosis of osteoporosis. Am J Obstet Gynecol. 2006;194:S3–11. Lin J, Yang Y, Zhang X, et al. BFH-OSTM, a new predictive screening tool for identifying osteoporosis in elderly Han Chinese males. 
Clin Interv Aging. 2017;12:1167–74. Zhang X, Lin J, Yang Y, et al. Comparison of three tools for predicting primary osteoporosis in an elderly male population in Beijing: a cross-sectional study. Clin Interv Aging. 2018;13:201–9. Hsiao PC, Chen TJ, Li CY, et al. Risk factors and incidence of repeat osteoporotic fractures among the elderly in Taiwan: a population-based cohort study. Medicine (Baltimore). 2015;94:e532. Xu J, Lin J, Li J, Yang Y, Fei Q. "Targeted percutaneous vertebroplasty" versus traditional percutaneous vertebroplasty for osteoporotic vertebral compression fracture. Surg Innov. 2019;26:551–9. Keaveny TM, Clarke BL, Cosman F, et al. Biomechanical computed tomography analysis (BCT) for clinical assessment of osteoporosis. Osteoporos Int. 2020;31:1025–48. Ma Z, Yang Y, Lin J, et al. BFH-OST, a new predictive screening tool for identifying osteoporosis in postmenopausal Han Chinese women. Clin Interv Aging. 2016;11:1051–9. Camacho PM, Petak SM, Binkley N, et al. American Association of Clinical Endocrinologists/American College of Endocrinology clinical practice guidelines for the diagnosis and treatment of postmenopausal osteoporosis-2020 update. Endocr Pract. 2020;26:1–46. Wang J, Wang X, Fang Z, Lu N, Han L. The effect of FRAX on the prediction of osteoporotic fractures in urban middle-aged and elderly healthy Chinese adults. Clinics (Sao Paulo). 2017;72:289–93. Johansson H, Azizieh F, Al Ali N, et al. FRAX- vs. T-score-based intervention thresholds for osteoporosis. Osteoporos Int. 2017;28:3099–105. Parikh R, Mathai A, Parikh S, Chandra Sekhar G, Thomas R. Understanding and using sensitivity, specificity and predictive values. Indian J Ophthalmol. 2008;56:45–50. Tebé C, del Río LM, Casas L, et al. Risk factors for fragility fractures in a cohort of Spanish women. Gac Sanit. 2011;25:507–12. Wang Y, Hao YJ, Deng XR, et al. Risk factors for bone mineral density changes in patients with rheumatoid arthritis and fracture risk assessment. 
Beijing Da Xue Xue Bao Yi Xue Ban. 2015;47:781–6. Krege JH, Kendler D, Krohn K, et al. Relationship between vertebral fracture burden, height loss, and pulmonary function in postmenopausal women with osteoporosis. J Clin Densitom. 2015;18:506–11. Kantor SM, Ossa KS, Hoshaw-Woodard SL, Lemeshow S. Height loss and osteoporosis of the hip. J Clin Densitom. 2004;7:65–70. Luthman S, Widén J, Borgström F. Appropriateness criteria for treatment of osteoporotic vertebral compression fractures. Osteoporos Int. 2018;29:793–804. Greiner M, Pfeiffer D, Smith RD. Principles and practical application of the receiver-operating characteristic analysis for diagnostic tests. Prev Vet Med. 2000;45:23–41. Liu S, Chen R, Ding N, et al. Setting the new FRAX reference threshold without bone mineral density in Chinese postmenopausal women. J Endocrinol Invest. 2021;44:347–52. Siris ES, Adler R, Bilezikian J, et al. The clinical diagnosis of osteoporosis: a position statement from the National Bone Health Alliance Working Group. Osteoporos Int. 2014;25:1439–43. All methods of our study were performed in accordance with the relevant regulations in the methods section. This work was supported by Beijing health system high-level health technical personnel training project (NO: 2015-3-009). The first two authors contributed equally to this work. Department of Orthopedics, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China Ning An, Ji Sheng Lin & Qi Fei Each author made substantial contributions to this work. NA, JSL, and QF contributed to the conception and design of the work. NA, JSL contributed to the acquisition of study data. NA contributed to the analysis and interpretation of data. All authors have drafted the work or substantively revised it, and all authors read and approved the final manuscript. Correspondence to Qi Fei.
This cross-sectional study was approved by the Ethics Committee of Beijing Friendship Hospital, Capital Medical University, and all subjects provided signed informed consent prior to commencing study involvement. An, N., Lin, J.S. & Fei, Q. Beijing Friendship Hospital Osteoporosis Self-Assessment Tool for Elderly Male (BFH-OSTM) vs Fracture Risk Assessment Tool (FRAX) for identifying painful new osteoporotic vertebral fractures in older Chinese men: a cross-sectional study. BMC Musculoskelet Disord 22, 596 (2021). https://doi.org/10.1186/s12891-021-04476-2 Osteoporosis Self-Assessment Tool for Asians Fracture risk assessment tool Beijing Friendship Hospital Osteoporosis Self-Assessment Tool for Elderly Male
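Screening-tool studies like the BFH-OSTM vs FRAX comparison above report their results as sensitivity, specificity and predictive values, all of which derive from a single 2×2 confusion table against the reference standard. A minimal sketch of those calculations (illustrative only — not the study's code or data):

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening-test metrics from a 2x2 confusion table.

    tp/fp/fn/tn = true/false positives and negatives against the
    reference standard (here that would be the DXA-based diagnosis).
    """
    return {
        "sensitivity": tp / (tp + fn),  # fraction of diseased flagged positive
        "specificity": tn / (tn + fp),  # fraction of healthy flagged negative
        "ppv": tp / (tp + fp),          # chance a positive result is correct
        "npv": tn / (tn + fn),          # chance a negative result is correct
    }

# Illustrative counts only, not taken from the paper:
m = screening_metrics(tp=80, fp=20, fn=20, tn=80)
```

Unlike sensitivity and specificity, PPV and NPV shift with disease prevalence, which is one reason screening tools validated in one population are re-checked before use in another.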
\begin{definition}[Definition:Triangle (Geometry)/Acute] An '''acute triangle''' is a triangle in which all three of the angles are acute. \end{definition}
ProofWiki
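The definition above has an equivalent side-length test: by the law of cosines the largest angle sits opposite the longest side, so every angle is acute exactly when the squares of the two shorter sides sum to more than the square of the longest. A small sketch of that check (the function name is ours, not ProofWiki's):

```python
def is_acute_triangle(a: float, b: float, c: float) -> bool:
    """True iff side lengths a, b, c form a triangle with three acute angles."""
    x, y, z = sorted((a, b, c))      # z is the longest side
    if x <= 0 or x + y <= z:         # degenerate or violates triangle inequality
        return False
    # Law of cosines: the angle opposite z is acute iff x^2 + y^2 > z^2;
    # the two smaller angles are then automatically acute.
    return x * x + y * y > z * z

# e.g. (4, 5, 6) is acute, while (3, 4, 5) is right-angled, hence not acute.
```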
LO04: Canadian best practice diagnostic algorithm for acute aortic syndrome R. Ohle, S. McIsaac, J. Yan, K. Yadav, P. Jetty, R. Atoui, N. Fortino, B. Wilson, N. Coffey, T. Scott, A. Cournoyer, F. Rubens, D. Savage, D. Ansell, J. Middaugh, A. Gupta, B. Bittira, Y. Callaway, S. Bignucolo, B. Mc Ardle, E. Lang Journal: Canadian Journal of Emergency Medicine / Volume 21 / Issue S1 / May 2019 Published online by Cambridge University Press: 02 May 2019, pp. S7-S8 Introduction: Acute aortic syndrome (AAS) is a time sensitive aortic catastrophe that is often misdiagnosed. There are currently no Canadian guidelines to aid in diagnosis. Our goal was to adapt the existing American Heart Association (AHA) and European Society of Cardiology (ESC) diagnostic algorithms for AAS into a Canadian evidence based best practices algorithm targeted for emergency medicine physicians. Methods: We chose to adapt existing high-quality clinical practice guidelines (CPG) previously developed by the AHA/ESC using the GRADE ADOLOPMENT approach.
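The adapted algorithm stratifies patients by estimated pre-test probability of AAS and attaches a testing recommendation to each band. As a rough sketch of that decision flow, with band boundaries taken from the reported Results (Low ≤0.5%, Moderate 0.6–5%, High >5%) — treating the Low cut-off as inclusive is our assumption, and the function is illustrative, not the committee's actual tool:

```python
def aas_risk_band(pretest_probability_pct: float) -> str:
    """Map an estimated pre-test probability of acute aortic syndrome (%)
    to a testing recommendation band (illustrative thresholds)."""
    if pretest_probability_pct <= 0.5:
        return "low: no further testing"
    if pretest_probability_pct <= 5.0:
        return "moderate: further testing required (e.g. D-dimer)"
    return "high: CT, MRI or transesophageal echocardiography"
```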
We created a National Advisory Committee consisting of 21 members from across Canada including academic, community and remote/rural emergency physicians/nurses, cardiothoracic and cardiovascular surgeons, cardiac anesthesiologists, critical care physicians, cardiologists, radiologists and patient representatives. The Advisory Committee communicated through multiple teleconference meetings, emails and a one-day in-person meeting. The panel prioritized questions and outcomes, using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess evidence and make recommendations. The algorithm was prepared and revised through feedback and discussions and through an iterative process until consensus was achieved. Results: The diagnostic algorithm comprises an updated pre-test probability assessment tool with further testing recommendations based on risk level. The updated tool incorporates likelihood of an alternative diagnosis and point-of-care ultrasound. The final best practice diagnostic algorithm defined risk levels as Low (≤0.5%: no further testing), Moderate (0.6–5%: further testing required) and High (>5%: computed tomography, magnetic resonance imaging, transesophageal echocardiography). During the consensus and feedback processes, we addressed a number of issues and concerns. D-dimer can be used to reduce probability of AAS in an intermediate risk group, but should not be used in a low or high-risk group. Ultrasound was incorporated as a bedside clinical examination option in pre-test probability assessment for aortic insufficiency and abdominal/thoracic aortic aneurysms. Conclusion: We have created the first Canadian best practice diagnostic algorithm for AAS. We hope this diagnostic algorithm will standardize and improve diagnosis of AAS in all emergency departments across Canada.

Role of magnetic field evolution on filamentary structure formation in intense laser–foil interactions HPL_EP HEDP and High Power Laser 2018 M.
King, N. M. H. Butler, R. Wilson, R. Capdessus, R. J. Gray, H. W. Powell, R. J. Dance, H. Padda, B. Gonzalez-Izquierdo, D. R. Rusby, N. P. Dover, G. S. Hicks, O. C. Ettlinger, C. Scullion, D. C. Carroll, Z. Najmudin, M. Borghesi, D. Neely, P. McKenna Journal: High Power Laser Science and Engineering / Volume 7 / 2019 Published online by Cambridge University Press: 13 March 2019, e14 Filamentary structures can form within the beam of protons accelerated during the interaction of an intense laser pulse with an ultrathin foil target. Such behaviour is shown to be dependent upon the formation time of quasi-static magnetic field structures throughout the target volume and the extent of the rear surface proton expansion over the same period. This is observed via both numerical and experimental investigations. By controlling the intensity profile of the laser drive, via the use of two temporally separated pulses, both the initial rear surface proton expansion and magnetic field formation time can be varied, resulting in modification to the degree of filamentary structure present within the laser-driven proton beam. On the origin of the circular hydraulic jump in a thin liquid film Rajesh K. Bhagat, N. K. Jha, P. F. Linden, D. Ian Wilson Journal: Journal of Fluid Mechanics / Volume 851 / 25 September 2018 Published online by Cambridge University Press: 31 July 2018, R5 Print publication: 25 September 2018 This study explores the formation of circular thin-film hydraulic jumps caused by the normal impact of a jet on an infinite planar surface. For more than a century, it has been believed that all hydraulic jumps are created due to gravity. However, we show that these thin-film hydraulic jumps result from energy loss due to surface tension and viscous forces alone. We show that, at the jump, surface tension and viscous forces balance the momentum in the liquid film and gravity plays no significant role. 
Experiments show no dependence on the orientation of the surface and a scaling relation balancing viscous forces and surface tension collapses the experimental data. A theoretical analysis shows that the downstream transport of surface energy is the previously neglected critical ingredient in these flows, and that capillary waves play the role of gravity waves in a traditional jump in demarcating the transition from the supercritical to subcritical flow associated with these jumps. The use of by-products in animal feeds P. N. Wilson Journal: BSAP Occasional Publication / Volume 3 / 1980 Over the last 30 years there has been spectacular growth in the UK broiler industry (Richardson, 1976), intensification in the UK pig industry and a move to larger herds and higher yields in the cattle industry. These trends have meant that farmers have bought more livestock feed both as compounds and as straights. Parallel to these changes, the move from 'target' to 'least cost' formulation by the compounder and computerized home mixer has increased the ability to deal with different raw materials and to utilize these successfully in compounded diets (Wilson, 1975). In spite of all these technical advances, livestock still depend on large quantities of cereals and other raw materials which are potential food for man (Wilson, 1977) as illustrated in Table 1. This is in spite of the fact that, over the past 15 years, the general trend-line in the 'carry over' stocks of world grain has been downwards (Brown, 1977). It follows that, in considering future feeding policies for livestock, there are good reasons why prudent steps should be taken to find and utilize alternative sources of both energy and protein for animal feeds. Holland has been more successful in this respect than the UK (De Boer, 1978) as instanced by the increasing imports of cassava which in part replaces European-grown barley and wheat (Walters, 1978). Research and development implications for the future P. N. 
Wilson, A. B. Lawrence Published online by Cambridge University Press: 27 February 2018, pp. 95-106 Choice of breed A model should be developed to allow the selection of the optimal breed on the basis of production traits and economic efficiency. Choice of selection method New breeding schemes to replace the current widespread use of progeny testing should be examined critically and, in particular, breeding schemes incorporating multiple ovulation and embryo transplant should be assessed. Identification of marker traits Research to evaluate the relevance of marker traits to milk quality should be pursued. Genetical engineering Long-term prospects of applying genetical engineering techniques to cattle should be assessed in terms of desk studies. Nutritional manipulation of milk fat The biochemical and metabolic aspects of lipid protection in the rumen should be examined further. Nutritional manipulation of milk protein Further studies should be undertaken to examine the effects and possible benefits of protein and specific amino acid protection. New milk products Work should be conducted to increase the range of marketable products of high added value, particularly new types of cheese for export. Alleged relationship between milk fat and coronary heart disease (CHD) The alleged causal relationship between dietary fat and CHD should be examined critically, particularly the definition of safe levels of serum cholesterol in man.

Initial Radio Observations of SN1987a in the Large Magellanic Cloud A. J. Turtle, D. Campbell-Wilson, J. D. Bunton, D. L. Jauncey, M. J. Kesteven, R. N. Manchester, R. P. Norris, M. C. Storey, G. L. White, J. E. Reynolds, D. F. Malin Journal: Symposium - International Astronomical Union / Volume 129 / 1988 Published online by Cambridge University Press: 03 August 2017, p. 189 A prompt radio burst has been observed from the supernova 1987a in the Large Magellanic Cloud. Observations were made at 0.843, 1.415, 2.29, and 8.41 GHz.
At frequencies around 1 GHz, the peak flux density reached about 150 mJy and occurred within four days of the supernova. This event may be a weak precursor to a major radio outburst of the type previously observed in other extragalactic supernovae. Radio monitoring of the supernova is continuing at each of the above frequencies, and coordination is underway of a southern hemisphere VLBI array to map the radio outburst region as it expands. Differential astrometry carried out on prime-focus plates taken with the Anglo-Australian telescope indicates that the component, star 1, of Sanduleak's star SK-69202 is within 0.05 ± 0.13 arcsec of the supernova. Capacity building for conservation: problems and potential solutions for sub-Saharan Africa M. J. O'Connell, O. Nasirwa, M. Carter, K. H. Farmer, M. Appleton, J. Arinaitwe, P. Bhanderi, G. Chimwaza, J. Copsey, J. Dodoo, A. Duthie, M. Gachanja, N. Hunter, B. Karanja, H. M. Komu, V. Kosgei, A. Kuria, C. Magero, M. Manten, P. Mugo, E. Müller, J. Mulonga, L. Niskanen, J. Nzilani, M. Otieno, N. Owen, J. Owuor, S. Paterson, S. Regnaut, R. Rono, J. Ruhiu, J. Theuri Njoka, L. Waruingi, B. Waswala Olewe, E. Wilson Journal: Oryx / Volume 53 / Issue 2 / April 2019 To achieve their conservation goals individuals, communities and organizations need to acquire a diversity of skills, knowledge and information (i.e. capacity). Despite current efforts to build and maintain appropriate levels of conservation capacity, it has been recognized that there will need to be a significant scaling-up of these activities in sub-Saharan Africa. This is because of the rapid increase in the number and extent of environmental problems in the region. We present a range of socio-economic contexts relevant to four key areas of African conservation capacity building: protected area management, community engagement, effective leadership, and professional e-learning. Under these core themes, 39 specific recommendations are presented. 
These were derived from multi-stakeholder workshop discussions at an international conference held in Nairobi, Kenya, in 2015. At the meeting 185 delegates (practitioners, scientists, community groups and government agencies) represented 105 organizations from 24 African nations and eight non-African nations. The 39 recommendations constituted six broad types of suggested action: (1) the development of new methods, (2) the provision of capacity building resources (e.g. information or data), (3) the communication of ideas or examples of successful initiatives, (4) the implementation of new research or gap analyses, (5) the establishment of new structures within and between organizations, and (6) the development of new partnerships. A number of cross-cutting issues also emerged from the discussions: the need for a greater sense of urgency in developing capacity building activities; the need to develop novel capacity building methodologies; and the need to move away from one-size-fits-all approaches. Developing one-dimensional implosions for inertial confinement fusion science HEDP and HPL 2016 J. L. Kline, S. A. Yi, A. N. Simakov, R. E. Olson, D. C. Wilson, G. A. Kyrala, T. S. Perry, S. H. Batha, E. L. Dewald, J. E. Ralph, D. J. Strozzi, A. G. MacPhee, D. A. Callahan, D. Hinkel, O. A. Hurricane, R. J. Leeper, A. B. Zylstra, R. R. Peterson, B. M. Haines, L. Yin, P. A. Bradley, R. C. Shah, T. Braun, J. Biener, B. J. Kozioziemski, J. D. Sater, M. M. Biener, A. V. Hamza, A. Nikroo, L. F. Berzak Hopkins, D. Ho, S. LePape, N. B. Meezan, D. S. Montgomery, W. S. Daughton, E. C. Merritt, T. Cardenas, E. S. Dodd Published online by Cambridge University Press: 12 December 2016, e44 Experiments on the National Ignition Facility show that multi-dimensional effects currently dominate the implosion performance. Low mode implosion symmetry and hydrodynamic instabilities seeded by capsule mounting features appear to be two key limiting factors for implosion performance. 
One reason these factors have a large impact on the performance of inertial confinement fusion implosions is the high convergence required to achieve high fusion gains. To tackle these problems, a predictable implosion platform is needed meaning experiments must trade-off high gain for performance. LANL has adopted three main approaches to develop a one-dimensional (1D) implosion platform where 1D means measured yield over the 1D clean calculation. A high adiabat, low convergence platform is being developed using beryllium capsules enabling larger case-to-capsule ratios to improve symmetry. The second approach is liquid fuel layers using wetted foam targets. With liquid fuel layers, the implosion convergence can be controlled via the initial vapor pressure set by the target fielding temperature. The last method is double shell targets. For double shells, the smaller inner shell houses the DT fuel and the convergence of this cavity is relatively small compared to hot spot ignition. However, double shell targets have a different set of trade-off versus advantages. Details for each of these approaches are described. The Australian Square Kilometre Array Pathfinder: Performance of the Boolardy Engineering Test Array D. McConnell, J. R. Allison, K. Bannister, M. E. Bell, H. E. Bignall, A. P. Chippendale, P. G. Edwards, L. Harvey-Smith, S. Hegarty, I. Heywood, A. W. Hotan, B. T. Indermuehle, E. Lenc, J. Marvil, A. Popping, W. Raja, J. E. Reynolds, R. J. Sault, P. Serra, M. A. Voronkov, M. Whiting, S. W. Amy, P. Axtens, L. Ball, T. J. Bateman, D. C.-J. Bock, R. Bolton, D. Brodrick, M. Brothers, A. J. Brown, J. D. Bunton, W. Cheng, T. Cornwell, D. DeBoer, I. Feain, R. Gough, N. Gupta, J. C. Guzman, G. A. Hampson, S. Hay, D. B. Hayman, S. Hoyle, B. Humphreys, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, J. Joseph, B. S. Koribalski, M. Leach, E. S. Lensson, A. MacLeod, S. Mackay, M. Marquarding, N. M. McClure-Griffiths, P. Mirtschin, D. Mitchell, S. Neuhold, A. Ng, R. 
Norris, S. Pearce, R. Y. Qiao, A. E. T. Schinckel, M. Shields, T. W. Shimwell, M. Storey, E. Troup, B. Turner, J. Tuthill, A. Tzioumis, R. M. Wark, T. Westmeier, C. Wilson, T. Wilson Journal: Publications of the Astronomical Society of Australia / Volume 33 / 2016 Published online by Cambridge University Press: 09 September 2016, e042 We describe the performance of the Boolardy Engineering Test Array, the prototype for the Australian Square Kilometre Array Pathfinder telescope. Boolardy Engineering Test Array is the first aperture synthesis radio telescope to use phased array feed technology, giving it the ability to electronically form up to nine dual-polarisation beams. We report the methods developed for forming and measuring the beams, and the adaptations that have been made to the traditional calibration and imaging procedures in order to allow BETA to function as a multi-beam aperture synthesis telescope. We describe the commissioning of the instrument and present details of Boolardy Engineering Test Array's performance: sensitivity, beam characteristics, polarimetric properties, and image quality. We summarise the astronomical science that it has produced and draw lessons from operating Boolardy Engineering Test Array that will be relevant to the commissioning and operation of the final Australian Square Kilometre Array Pathfinder telescope.

Predictors of community-associated Staphylococcus aureus, methicillin-resistant and methicillin-susceptible Staphylococcus aureus skin and soft tissue infections in primary-care settings G. C. LEE, R. G. HALL, N. K. BOYD, S. D. DALLAS, L. C. DU, L. B. TREVIÑO, C. RETZLOFF, S. B. TREVIÑO, K. A. LAWSON, J. P. WILSON, R. J. OLSEN, Y. WANG, C. R. FREI Journal: Epidemiology & Infection / Volume 144 / Issue 15 / November 2016 Skin and soft tissue infections (SSTIs) due to Staphylococcus aureus have become increasingly common in the outpatient setting; however, risk factors for differentiating methicillin-resistant S.
aureus (MRSA) and methicillin-susceptible S. aureus (MSSA) SSTIs are needed to better inform antibiotic treatment decisions. We performed a case-case-control study within 14 primary-care clinics in South Texas from 2007 to 2015. Overall, 325 patients [S. aureus SSTI cases (case group 1, n = 175); MRSA SSTI cases (case group 2, n = 115); MSSA SSTI cases (case group 3, n = 60); uninfected control group (control, n = 150)] were evaluated. Each case group was compared to the control group, and then qualitatively contrasted to identify unique risk factors associated with S. aureus, MRSA, and MSSA SSTIs. Overall, prior SSTIs [adjusted odds ratio (aOR) 7·60, 95% confidence interval (CI) 3·31–17·45], male gender (aOR 1·74, 95% CI 1·06–2·85), and absence of healthcare occupation status (aOR 0·14, 95% CI 0·03–0·68) were independently associated with S. aureus SSTIs. The only unique risk factor for community-associated (CA)-MRSA SSTIs was a high body weight (⩾110 kg) (aOR 2·03, 95% CI 1·01–4·09). Predicting the diagnosis of autism in adults using the Autism-Spectrum Quotient (AQ) questionnaire K. L. Ashwood, N. Gillan, J. Horder, H. Hayward, E. Woodhouse, F. S. McEwen, J. Findon, H. Eklund, D. Spain, C. E. Wilson, T. Cadman, S. Young, V. Stoencheva, C. M. Murphy, D. Robertson, T. Charman, P. Bolton, K. Glaser, P. Asherson, E. Simonoff, D. G. Murphy Journal: Psychological Medicine / Volume 46 / Issue 12 / September 2016 Published online by Cambridge University Press: 29 June 2016, pp. 2595-2604 Many adults with autism spectrum disorder (ASD) remain undiagnosed. Specialist assessment clinics enable the detection of these cases, but such services are often overstretched. It has been proposed that unnecessary referrals to these services could be reduced by prioritizing individuals who score highly on the Autism-Spectrum Quotient (AQ), a self-report questionnaire measure of autistic traits. 
However, the ability of the AQ to predict who will go on to receive a diagnosis of ASD in adults is unclear. We studied 476 adults, seen consecutively at a national ASD diagnostic referral service for suspected ASD. We tested AQ scores as predictors of ASD diagnosis made by expert clinicians according to International Classification of Diseases (ICD)-10 criteria, informed by the Autism Diagnostic Observation Schedule-Generic (ADOS-G) and Autism Diagnostic Interview-Revised (ADI-R) assessments. Of the participants, 73% received a clinical diagnosis of ASD. Self-report AQ scores did not significantly predict receipt of a diagnosis. While AQ scores provided high sensitivity of 0.77 [95% confidence interval (CI) 0.72–0.82] and positive predictive value of 0.76 (95% CI 0.70–0.80), the specificity of 0.29 (95% CI 0.20–0.38) and negative predictive value of 0.36 (95% CI 0.22–0.40) were low. Thus, 64% of those who scored below the AQ cut-off were 'false negatives' who did in fact have ASD. Co-morbidity data revealed that generalized anxiety disorder may 'mimic' ASD and inflate AQ scores, leading to false positives. The AQ's utility for screening referrals was limited in this sample. Recommendations supporting the AQ's role in the assessment of adult ASD, e.g. UK NICE guidelines, may need to be reconsidered. Commission 19: Rotation of the Earth (Rotation de la Terre) Nicole Capitaine, Véronique Dehant, G. Beutler, P. Brosche, A. Brzeziński, T. Fukushima, D. Gambis, R. Gross, J. Hefty, C. Huang, Z. Malkin, D. McCarthy, A. Poma, J. Ray, B. Richter, C. Ron, N. Sidorenkov, M. Soffel, C. Wilson, Ya. Yatskiv Journal: Transactions of the International Astronomical Union / Volume 25 / Issue 1 / 2002 Simultaneous Optical/Gamma-ray Observations of GRBs J. Greiner, W. Wensel, R. Hudec, M. Varady, P. Štěpán, P. Spruný, J. Florián, E.I. Moskalenko, A.V. Barabanov, R. Ziener, K. Birkle, N. Bade, S.B. Tritton, T. Ichikawa, G.J. Fishman, C. Kouveliotou, C.A. Meegan, W.S. Paciesas, R.B. 
Wilson Journal: International Astronomical Union Colloquium / Volume 151 / 1995 This status report presents details on the project to search for serendipitous time-correlated optical photographic observations of γ-ray bursters. The ongoing photographic observations at nine observatories are used to look for plates which have been exposed simultaneously with a γ-ray burst detected by BATSE and contain the burst position. The results for the third year of BATSE operation are presented. Modeling fluid flow in Medullosa, an anatomically unusual Carboniferous seed plant Jonathan P. Wilson, Andrew H. Knoll, N. Michele Holbrook, Charles R. Marshall Journal: Paleobiology / Volume 34 / Issue 4 / Fall 2008 Print publication: Fall 2008 Medullosa stands apart from most Paleozoic seed plants in its combination of large leaf area, complex vascular structure, and extremely large water-conducting cells. To investigate the hydraulic consequences of these anatomical features and to compare them with other seed plants, we have adapted a model of water transport in xylem cells that accounts for resistance to flow from the lumen, pits, and pit membranes, and that can be used to compare extinct and extant plants in a quantitative way. Application of this model to Medullosa, the Paleozoic coniferophyte Cordaites, and the extant conifer Pinus shows that medullosan tracheids had the capacity to transport water at volume flow rates more comparable to those of angiosperm vessels than to those characteristic of ancient and modern coniferophyte tracheids. 
Tracheid structure in Medullosa, including the large pit membrane area per tracheid and the high ratio of tracheid diameter to wall thickness, suggests that its xylem cells operated at significant risk of embolism and implosion, making this plant unlikely to survive significant water stress. These features further suggest that tracheids could not have furnished significant structural support, requiring either that other tissues supported these plants or that at least some medullosans were vines. In combination with high tracheid conductivity, distinctive anatomical characters of Medullosa such as the anomalous growth of vascular cambium and the large number of leaf traces that enter each petiole base suggest vascular adaptations to meet the evapotranspiration demands of its large leaves. The evolution of highly efficient conducting cells dictates a need to supply structural support via other tissues, both in tracheid-based stem seed plants and in vessel-bearing angiosperms.

Gravitationally Lensed CO and Dust at High Redshift: New LMT/GTM Images and Spectra of Sub-Millimeter Galaxies J. D. Lowenthal, K. Harrington, D. Berman, M. Yun, R. Cybulski, G. W. Wilson, I. Aretxaga, M. Chavez, V. De la Luz, N. Erickson, D. Ferrusca, A. Gallup, D. Hughes, A. Montaña, G. Narayanan, D. Sánchez-Argüelles, F. P. Schloerb, K. Souccar, E. Terlevich, R. Terlevich, M. Zeballos, J. A. Zavala Journal: Proceedings of the International Astronomical Union / Volume 11 / Issue S321 / March 2016 We have assembled a new sample of some of the most FIR-luminous galaxies in the Universe and have imaged them in 1.1 mm dust emission and measured their redshifts 1 < z < 4 via CO emission lines using the 32-m Large Millimeter Telescope / Gran Telescopio Milimétrico (LMT/GTM). Our sample of 31 submm galaxies (SMGs), culled from the Planck and Herschel all-sky surveys, includes 14 of the 21 most luminous galaxies known, with L_FIR > 10^14 L⊙ and SFR > 10^4 M⊙/yr.
These extreme inferred luminosities – and multiple / extended 1.1 mm images – imply that most or all are strongly gravitationally lensed, with typical magnification μ ~ 10×. The gravitational lensing provides two significant benefits: (1) it boosts the S/N, and (2) it allows investigation of star formation and gas processes on sub-kpc scales.

Long-term mortality of hospitalized pneumonia in the EPIC-Norfolk cohort P. K. MYINT, K. R. HAWKINS, A. B. CLARK, R. N. LUBEN, N. J. WAREHAM, K.-T. KHAW, A. M. WILSON Journal: Epidemiology & Infection / Volume 144 / Issue 4 / March 2016 Little is known about cause-specific long-term mortality beyond 30 days in pneumonia. We aimed to compare the mortality of patients with hospitalized pneumonia with that of age- and sex-matched controls beyond 30 days. Participants were drawn from the European Prospective Investigation into Cancer (EPIC)-Norfolk prospective population study. Hospitalized pneumonia cases were identified from record linkage (ICD-10: J12-J18). For this study we excluded people with hospitalized pneumonia who died within 30 days. Each case identified was matched to four controls and followed up until the end of June 2012 (total 15 074 person-years, mean 6·1 years, range 0·08–15·2 years). Cox regression models were constructed to examine the all-cause, respiratory and cardiovascular mortality using date of pneumonia onset as baseline with binary pneumonia status as exposure. A total of 2465 men and women (503 cases, 1962 controls) [mean age (s.d.) 64·5 (8·3) years] were included in the study.
Over the period from 30 days to 1 year, hazard ratios (HRs) of all-cause and cardiovascular mortality were 7·3 [95% confidence interval (CI) 5·4–9·9] and 5·9 (95% CI 3·5–9·7), respectively (with very few respiratory deaths within the same period) in cases compared to controls after adjusting for age, sex, asthma, smoking status, pack years, systolic and diastolic blood pressure, diabetes, physical activity, waist-to-hip ratio, prevalent cardiovascular and respiratory diseases. All outcomes assessed also showed increased risk of death in cases compared to controls after 1 year; respiratory cause of death being the most significant during that period (HR 16·4, 95% CI 8·9–30·1). Hospitalized pneumonia was associated with increased all-cause and specific-cause mortality beyond 30 days.

A Theoretical Model for the Convection of Magnetic Flux in and near Sunspots F. Meyer, H. U. Schmidt, N. O. Weiss, P. R. Wilson Journal: Symposium - International Astronomical Union / Volume 56 / 1974 In this paper we investigate the physical processes that lead to the growth and decay of magnetic flux in and near sunspots. An initial phase of rapid growth is characterized by the emergence of magnetic flux from the deep convection zone. As the flux rope rises through the surface the magnetic field is swept to the junctions of the supergranular network where sunspots are formed. These flux concentrations follow the footpoints of the emergent flux rope as they rapidly move apart.

Measurement of the angle, temperature and flux of fast electrons emitted from intense laser–solid interactions Energetic Electrons D. R. Rusby, L. A. Wilson, R. J. Gray, R. J. Dance, N. M. H. Butler, D. A. MacLellan, G. G. Scott, V. Bagnoud, B. Zielbauer, P. McKenna, D.
Neely Journal: Journal of Plasma Physics / Volume 81 / Issue 5 / October 2015 Published online by Cambridge University Press: 13 July 2015, 475810505 High-intensity laser–solid interactions generate relativistic electrons, as well as high-energy (multi-MeV) ions and x-rays. The directionality, spectra and total number of electrons that escape a target-foil are dependent on the absorption, transport and rear-side sheath conditions. Measuring the electrons escaping the target will aid in improving our understanding of these absorption processes and the rear-surface sheath fields that retard the escaping electrons and accelerate ions via the target normal sheath acceleration (TNSA) mechanism. A comprehensive Geant4 study was performed to help analyse measurements made with a wrap-around diagnostic that surrounds the target and uses differential filtering with a FUJI-film image plate detector. The contribution of secondary sources such as x-rays and protons to the measured signal has been taken into account to aid in the retrieval of the electron signal. Angular and spectral data from a high-intensity laser–solid interaction are presented and accompanied by simulations. The total number of emitted electrons has been measured as 2.6 × 10^13 with an estimated total energy of 12 ± 1 J from a 100 μm Cu target with 140 J of incident laser energy during a 4 × 10^20 W cm^-2 interaction.

By Mitchell Aboulafia, Frederick Adams, Marilyn McCord Adams, Robert M. Adams, Laird Addis, James W. Allard, David Allison, William P. Alston, Karl Ameriks, C. Anthony Anderson, David Leech Anderson, Lanier Anderson, Roger Ariew, David Armstrong, Denis G. Arnold, E. J. Ashworth, Margaret Atherton, Robin Attfield, Bruce Aune, Edward Wilson Averill, Jody Azzouni, Kent Bach, Andrew Bailey, Lynne Rudder Baker, Thomas R. Baldwin, Jon Barwise, George Bealer, William Bechtel, Lawrence C. Becker, Mark A. Bedau, Ernst Behler, José A.
Benardete, Ermanno Bencivenga, Jan Berg, Michael Bergmann, Robert L. Bernasconi, Sven Bernecker, Bernard Berofsky, Rod Bertolet, Charles J. Beyer, Christian Beyer, Joseph Bien, Joseph Bien, Peg Birmingham, Ivan Boh, James Bohman, Daniel Bonevac, Laurence BonJour, William J. Bouwsma, Raymond D. Bradley, Myles Brand, Richard B. Brandt, Michael E. Bratman, Stephen E. Braude, Daniel Breazeale, Angela Breitenbach, Jason Bridges, David O. Brink, Gordon G. Brittan, Justin Broackes, Dan W. Brock, Aaron Bronfman, Jeffrey E. Brower, Bartosz Brozek, Anthony Brueckner, Jeffrey Bub, Lara Buchak, Otavio Bueno, Ann E. Bumpus, Robert W. Burch, John Burgess, Arthur W. Burks, Panayot Butchvarov, Robert E. Butts, Marina Bykova, Patrick Byrne, David Carr, Noël Carroll, Edward S. Casey, Victor Caston, Victor Caston, Albert Casullo, Robert L. Causey, Alan K. L. Chan, Ruth Chang, Deen K. Chatterjee, Andrew Chignell, Roderick M. Chisholm, Kelly J. Clark, E. J. Coffman, Robin Collins, Brian P. Copenhaver, John Corcoran, John Cottingham, Roger Crisp, Frederick J. Crosson, Antonio S. Cua, Phillip D. Cummins, Martin Curd, Adam Cureton, Andrew Cutrofello, Stephen Darwall, Paul Sheldon Davies, Wayne A. Davis, Timothy Joseph Day, Claudio de Almeida, Mario De Caro, Mario De Caro, John Deigh, C. F. Delaney, Daniel C. Dennett, Michael R. DePaul, Michael Detlefsen, Daniel Trent Devereux, Philip E. Devine, John M. Dillon, Martin C. Dillon, Robert DiSalle, Mary Domski, Alan Donagan, Paul Draper, Fred Dretske, Mircea Dumitru, Wilhelm Dupré, Gerald Dworkin, John Earman, Ellery Eells, Catherine Z. Elgin, Berent Enç, Ronald P. Endicott, Edward Erwin, John Etchemendy, C. Stephen Evans, Susan L. Feagin, Solomon Feferman, Richard Feldman, Arthur Fine, Maurice A. Finocchiaro, William FitzPatrick, Richard E. Flathman, Gvozden Flego, Richard Foley, Graeme Forbes, Rainer Forst, Malcolm R. 
Forster, Daniel Fouke, Patrick Francken, Samuel Freeman, Elizabeth Fricker, Miranda Fricker, Michael Friedman, Michael Fuerstein, Richard A. Fumerton, Alan Gabbey, Pieranna Garavaso, Daniel Garber, Jorge L. A. Garcia, Robert K. Garcia, Don Garrett, Philip Gasper, Gerald Gaus, Berys Gaut, Bernard Gert, Roger F. Gibson, Cody Gilmore, Carl Ginet, Alan H. Goldman, Alvin I. Goldman, Alfonso Gömez-Lobo, Lenn E. Goodman, Robert M. Gordon, Stefan Gosepath, Jorge J. E. Gracia, Daniel W. Graham, George A. Graham, Peter J. Graham, Richard E. Grandy, I. Grattan-Guinness, John Greco, Philip T. Grier, Nicholas Griffin, Nicholas Griffin, David A. Griffiths, Paul J. Griffiths, Stephen R. Grimm, Charles L. Griswold, Charles B. Guignon, Pete A. Y. Gunter, Dimitri Gutas, Gary Gutting, Paul Guyer, Kwame Gyekye, Oscar A. Haac, Raul Hakli, Raul Hakli, Michael Hallett, Edward C. Halper, Jean Hampton, R. James Hankinson, K. R. Hanley, Russell Hardin, Robert M. Harnish, William Harper, David Harrah, Kevin Hart, Ali Hasan, William Hasker, John Haugeland, Roger Hausheer, William Heald, Peter Heath, Richard Heck, John F. Heil, Vincent F. Hendricks, Stephen Hetherington, Francis Heylighen, Kathleen Marie Higgins, Risto Hilpinen, Harold T. Hodes, Joshua Hoffman, Alan Holland, Robert L. Holmes, Richard Holton, Brad W. Hooker, Terence E. Horgan, Tamara Horowitz, Paul Horwich, Vittorio Hösle, Paul Hoβfeld, Daniel Howard-Snyder, Frances Howard-Snyder, Anne Hudson, Deal W. Hudson, Carl A. Huffman, David L. Hull, Patricia Huntington, Thomas Hurka, Paul Hurley, Rosalind Hursthouse, Guillermo Hurtado, Ronald E. Hustwit, Sarah Hutton, Jonathan Jenkins Ichikawa, Harry A. Ide, David Ingram, Philip J. Ivanhoe, Alfred L. Ivry, Frank Jackson, Dale Jacquette, Joseph Jedwab, Richard Jeffrey, David Alan Johnson, Edward Johnson, Mark D. Jordan, Richard Joyce, Hwa Yol Jung, Robert Hillary Kane, Tomis Kapitan, Jacquelyn Ann K. Kegley, James A. 
Keller, Ralph Kennedy, Sergei Khoruzhii, Jaegwon Kim, Yersu Kim, Nathan L. King, Patricia Kitcher, Peter D. Klein, E. D. Klemke, Virginia Klenk, George L. Kline, Christian Klotz, Simo Knuuttila, Joseph J. Kockelmans, Konstantin Kolenda, Sebastian Tomasz Kołodziejczyk, Isaac Kramnick, Richard Kraut, Fred Kroon, Manfred Kuehn, Steven T. Kuhn, Henry E. Kyburg, John Lachs, Jennifer Lackey, Stephen E. Lahey, Andrea Lavazza, Thomas H. Leahey, Joo Heung Lee, Keith Lehrer, Dorothy Leland, Noah M. Lemos, Ernest LePore, Sarah-Jane Leslie, Isaac Levi, Andrew Levine, Alan E. Lewis, Daniel E. Little, Shu-hsien Liu, Shu-hsien Liu, Alan K. L. Chan, Brian Loar, Lawrence B. Lombard, John Longeway, Dominic McIver Lopes, Michael J. Loux, E. J. Lowe, Steven Luper, Eugene C. Luschei, William G. Lycan, David Lyons, David Macarthur, Danielle Macbeth, Scott MacDonald, Jacob L. Mackey, Louis H. Mackey, Penelope Mackie, Edward H. Madden, Penelope Maddy, G. B. Madison, Bernd Magnus, Pekka Mäkelä, Rudolf A. Makkreel, David Manley, William E. Mann (W.E.M.), Vladimir Marchenkov, Peter Markie, Jean-Pierre Marquis, Ausonio Marras, Mike W. Martin, A. P. Martinich, William L. McBride, David McCabe, Storrs McCall, Hugh J. McCann, Robert N. McCauley, John J. McDermott, Sarah McGrath, Ralph McInerny, Daniel J. McKaughan, Thomas McKay, Michael McKinsey, Brian P. McLaughlin, Ernan McMullin, Anthonie Meijers, Jack W. Meiland, William Jason Melanson, Alfred R. Mele, Joseph R. Mendola, Christopher Menzel, Michael J. Meyer, Christian B. Miller, David W. Miller, Peter Millican, Robert N. Minor, Phillip Mitsis, James A. Montmarquet, Michael S. Moore, Tim Moore, Benjamin Morison, Donald R. Morrison, Stephen J. Morse, Paul K. Moser, Alexander P. D. Mourelatos, Ian Mueller, James Bernard Murphy, Mark C. Murphy, Steven Nadler, Jan Narveson, Alan Nelson, Jerome Neu, Samuel Newlands, Kai Nielsen, Ilkka Niiniluoto, Carlos G. Noreña, Calvin G. Normore, David Fate Norton, Nikolaj Nottelmann, Donald Nute, David S. 
Oderberg, Steve Odin, Michael O'Rourke, Willard G. Oxtoby, Heinz Paetzold, George S. Pappas, Anthony J. Parel, Lydia Patton, R. P. Peerenboom, Francis Jeffry Pelletier, Adriaan T. Peperzak, Derk Pereboom, Jaroslav Peregrin, Glen Pettigrove, Philip Pettit, Edmund L. Pincoffs, Andrew Pinsent, Robert B. Pippin, Alvin Plantinga, Louis P. Pojman, Richard H. Popkin, John F. Post, Carl J. Posy, William J. Prior, Richard Purtill, Michael Quante, Philip L. Quinn, Philip L. Quinn, Elizabeth S. Radcliffe, Diana Raffman, Gerard Raulet, Stephen L. Read, Andrews Reath, Andrew Reisner, Nicholas Rescher, Henry S. Richardson, Robert C. Richardson, Thomas Ricketts, Wayne D. Riggs, Mark Roberts, Robert C. Roberts, Luke Robinson, Alexander Rosenberg, Gary Rosenkranz, Bernice Glatzer Rosenthal, Adina L. Roskies, William L. Rowe, T. M. Rudavsky, Michael Ruse, Bruce Russell, Lilly-Marlene Russow, Dan Ryder, R. M. Sainsbury, Joseph Salerno, Nathan Salmon, Wesley C. Salmon, Constantine Sandis, David H. Sanford, Marco Santambrogio, David Sapire, Ruth A. Saunders, Geoffrey Sayre-McCord, Charles Sayward, James P. Scanlan, Richard Schacht, Tamar Schapiro, Frederick F. Schmitt, Jerome B. Schneewind, Calvin O. Schrag, Alan D. Schrift, George F. Schumm, Jean-Loup Seban, David N. Sedley, Kenneth Seeskin, Krister Segerberg, Charlene Haddock Seigfried, Dennis M. Senchuk, James F. Sennett, William Lad Sessions, Stewart Shapiro, Tommie Shelby, Donald W. Sherburne, Christopher Shields, Roger A. Shiner, Sydney Shoemaker, Robert K. Shope, Kwong-loi Shun, Wilfried Sieg, A. John Simmons, Robert L. Simon, Marcus G. Singer, Georgette Sinkler, Walter Sinnott-Armstrong, Matti T. Sintonen, Lawrence Sklar, Brian Skyrms, Robert C. Sleigh, Michael Anthony Slote, Hans Sluga, Barry Smith, Michael Smith, Robin Smith, Robert Sokolowski, Robert C. Solomon, Marta Soniewicka, Philip Soper, Ernest Sosa, Nicholas Southwood, Paul Vincent Spade, T. L. S. Sprigge, Eric O. Springsted, George J. 
Stack, Rebecca Stangl, Jason Stanley, Florian Steinberger, Sören Stenlund, Christopher Stephens, James P. Sterba, Josef Stern, Matthias Steup, M. A. Stewart, Leopold Stubenberg, Edith Dudley Sulla, Frederick Suppe, Jere Paul Surber, David George Sussman, Sigrún Svavarsdóttir, Zeno G. Swijtink, Richard Swinburne, Charles C. Taliaferro, Robert B. Talisse, John Tasioulas, Paul Teller, Larry S. Temkin, Mark Textor, H. S. Thayer, Peter Thielke, Alan Thomas, Amie L. Thomasson, Katherine Thomson-Jones, Joshua C. Thurow, Vzalerie Tiberius, Terrence N. Tice, Paul Tidman, Mark C. Timmons, William Tolhurst, James E. Tomberlin, Rosemarie Tong, Lawrence Torcello, Kelly Trogdon, J. D. Trout, Robert E. Tully, Raimo Tuomela, John Turri, Martin M. Tweedale, Thomas Uebel, Jennifer Uleman, James Van Cleve, Harry van der Linden, Peter van Inwagen, Bryan W. Van Norden, René van Woudenberg, Donald Phillip Verene, Samantha Vice, Thomas Vinci, Donald Wayne Viney, Barbara Von Eckardt, Peter B. M. Vranas, Steven J. Wagner, William J. Wainwright, Paul E. Walker, Robert E. Wall, Craig Walton, Douglas Walton, Eric Watkins, Richard A. Watson, Michael V. Wedin, Rudolph H. Weingartner, Paul Weirich, Paul J. Weithman, Carl Wellman, Howard Wettstein, Samuel C. Wheeler, Stephen A. White, Jennifer Whiting, Edward R. Wierenga, Michael Williams, Fred Wilson, W. Kent Wilson, Kenneth P. Winkler, John F. Wippel, Jan Woleński, Allan B. Wolter, Nicholas P. Wolterstorff, Rega Wood, W. Jay Wood, Paul Woodruff, Alison Wylie, Gideon Yaffe, Takashi Yagisawa, Yutaka Yamamoto, Keith E. Yandell, Xiaomei Yang, Dean Zimmerman, Günter Zoller, Catherine Zuckert, Michael Zuckert, Jack A. Zupko (J.A.Z.) Edited by Robert Audi, University of Notre Dame, Indiana Book: The Cambridge Dictionary of Philosophy Published online: 05 August 2015 Print publication: 27 April 2015, pp ix-xxx 2013 multistate outbreaks of Cyclospora cayetanensis infections associated with fresh produce: focus on the Texas investigations F. ABANYIE, R. 
R. HARVEY, J. R. HARRIS, R. E. WIEGAND, L. GAUL, M. DESVIGNES-KENDRICK, K. IRVIN, I. WILLIAMS, R. L. HALL, B. HERWALDT, E. B. GRAY, Y. QVARNSTROM, M. E. WISE, V. CANTU, P. T. CANTEY, S. BOSCH, A. J. DA SILVA, A. FIELDS, H. BISHOP, A. WELLMAN, J. BEAL, N. WILSON, A. E. FIORE, R. TAUXE, S. LANCE, L. SLUTSKER, M. PARISE, the Multistate Cyclosporiasis Outbreak Investigation Team Journal: Epidemiology & Infection / Volume 143 / Issue 16 / December 2015 Published online by Cambridge University Press: 13 April 2015, pp. 3451-3458 The 2013 multistate outbreaks contributed to the largest annual number of reported US cases of cyclosporiasis since 1997. In this paper we focus on investigations in Texas. We defined an outbreak-associated case as laboratory-confirmed cyclosporiasis in a person with illness onset between 1 June and 31 August 2013, with no history of international travel in the previous 14 days. Epidemiological, environmental, and traceback investigations were conducted. Of the 631 cases reported in the multistate outbreaks, Texas reported the greatest number of cases, 270 (43%). More than 70 clusters were identified in Texas, four of which were further investigated. One restaurant-associated cluster of 25 case-patients was selected for a case-control study. Consumption of cilantro was most strongly associated with illness on meal date-matched analysis (matched odds ratio 19·8, 95% confidence interval 4·0–∞). All case-patients in the other three clusters investigated also ate cilantro. Traceback investigations converged on three suppliers in Puebla, Mexico. Cilantro was the vehicle of infection in the four clusters investigated; the temporal association of these clusters with the large overall increase in cyclosporiasis cases in Texas suggests cilantro was the vehicle of infection for many other cases. 
However, the paucity of epidemiological and traceback information does not allow for a conclusive determination; moreover, molecular epidemiological tools for cyclosporiasis that could provide more definitive linkage between case clusters are needed.
\begin{document} \title{The Banach manifold $C^k(M,N)$} \author{Johannes Wittmann} \maketitle \begin{abstract} Let $M$ be a compact manifold without boundary and let $N$ be a connected manifold without boundary. For each $k\in\mathbb{N}$ the set of $k$ times continuously differentiable maps between $M$ and $N$ has the structure of a smooth Banach manifold where the underlying manifold topology is the compact-open $C^k$ topology. We provide a detailed and rigorous proof for this important statement which is already partially covered by existing literature. \end{abstract} \tableofcontents \section{Introduction} Let $M$ be a closed manifold\footnote{By ``manifold'' we always mean a finite-dimensional manifold with or without boundary. All manifolds we consider are non-empty, second-countable, and Hausdorff. All manifolds considered are smooth (= $C^\infty$), unless otherwise specified. A closed manifold is a compact manifold without boundary. Moreover, in the following we use ``vector space'' for vector spaces over $\mathbb{R}$.} and let $N$ be a connected manifold without boundary. For each $k\in\mathbb{N}:=\{0,1,2,\ldots\}$ we denote by $C^k(M,N)$ the set of $k$ times continuously differentiable maps between $M$ and $N$. It is well known that for each $k\in\mathbb{N}$ the set $C^k(M,N)$ has the structure of a smooth Banach manifold. The natural idea to turn $C^k(M,N)$ into a Banach manifold is to choose a Riemannian metric on $N$ and then use the exponential map of $N$ to construct the charts of $C^k(M,N)$. More precisely, for $g$ close enough to $f$, the map \[C^k(M,N)\ni g\mapsto (p\mapsto (\textup{exp}_{f(p)})^{-1}g(p))\in \Gamma_{C^k}(f^*TN),\] is a chart around $f$. Here, $\textup{exp}$ denotes the exponential map of the Riemannian manifold $N$. This idea can be found in many places in the literature (references are given below). Let us denote this chart by $\varphi_f$. 
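As a simple illustration of this construction (which is not needed for the proofs below), consider the special case $N=\mathbb{R}^n$ equipped with the Euclidean metric. Then $\textup{exp}_q(v)=q+v$ under the canonical identification $T_q\mathbb{R}^n\cong\mathbb{R}^n$, so the chart around $f$ reduces to \[\varphi_f(g)=g-f,\] and, identifying $\Gamma_{C^k}(f^*T\mathbb{R}^n)$ with $C^k(M,\mathbb{R}^n)$, the charts are simply translations in the Banach space $C^k(M,\mathbb{R}^n)$. For a general target $N$ the exponential map provides such a linear identification only locally, which is where the technical difficulties arise.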
Driven by applications, there are several natural requirements and questions: One needs a rigorous and detailed proof that these charts induce a smooth structure. Are the transition maps $\varphi_f\circ (\varphi_g)^{-1}$ only smooth for $f,g\in C^\infty(M,N)$ or are they also smooth in the case that $f$ and $g$ are precisely $k$ times continuously differentiable? Is the manifold topology of $C^k(M,N)$ the compact-open $C^k$ topology? An investigation of the literature regarding such questions brought up only partial answers and proofs \cite{Eells1, Blue, Eells, Eliasson, Palais, Michor, Hamilton, MTAA, KM}. We explain this in more detail at the end of this section. Note that the case $k=\infty$ is better covered in the literature; in particular, a very thorough treatment of the space $C^\infty(M,N)$ can be found in \cite{KM}. In this paper we provide a detailed proof for the following theorem. \begin{main*}[cf.\ Theorem \ref{theorem ckmn banach mf}]Let $M$ be a closed manifold and let $N$ be a connected manifold without boundary. Let $k\in \mathbb{N}$ and fix a Riemannian metric on $N$. Then the set $C^k(M,N)$ endowed with the compact-open $C^k$ topology has the structure of a smooth Banach manifold with the following property: for any $f\in C^k(M,N)$ and any small enough open neighborhood $U_f$ of $f$ in $C^k(M,N)$ there is an open neighborhood $V_f$ of the zero section in $\Gamma_{C^k}(f^*TN)$ such that the map \begin{align*} \varphi_f\colon U_f&\to V_f,\\ g&\mapsto \textup{exp}^{-1}\circ (f,g), \end{align*} i.e., $\varphi_f(g)(p)=(\textup{exp}_{f(p)})^{-1}g(p)$ for all $g\in U_f$, $p\in M$, is a local chart. Here, we endow the space $\Gamma_{C^k}(f^*TN)$ of $C^k$-sections of $f^*TN$ with the usual $C^k$-norm. Note that the inverse of $\varphi_f$ is given by $$\varphi_f^{-1}(s)(p)=\textup{exp}_{f(p)}s(p)$$ for all $s\in V_f$, $p\in M$. Moreover, this smooth structure on $C^k(M,N)$ does not depend on the choice of Riemannian metric on $N$.
\end{main*} Our detailed treatment of the proof of the Main Theorem might also be helpful for treating mapping groupoids of $C^k$-maps \cite{gr1, gr2}. The basic strategy to prove the Main Theorem is as follows. We first show that the maps $\varphi_f\colon U_f\to V_f$ are homeomorphisms. Then we argue why the transition maps $\varphi_f\circ \varphi_g^{-1}$ given by \[(\varphi_f\circ \varphi_g^{-1})(s)=\left((\textup{exp}_{f})^{-1}\circ\textup{exp}_{g}\right)s \] are smooth provided that $U_f\cap U_g\neq\varnothing$. For this, our arguments are inspired by \cite{Blue}. The smoothness of the transition maps is the most delicate part, and one has to argue very carefully, since $\varphi_f$ and $\varphi_g$ are defined using not necessarily smooth maps $f$ and $g$. The main input for this will be the $\Omega$-lemma (using the terminology of \cite{Blue,MTAA}) which we will first prove in a ``local'' version, see Lemma \ref{lemma loc omega}, and then ``globalize'' to maps between sections of vector bundles, see Lemma \ref{lemma glob omega}. In \cite{Eells} the idea of how the charts of $C^k(M,N)$ are constructed is outlined; however, it is not shown there that the charts are homeomorphisms, nor that the transition maps are smooth. In \cite{Eells1} one finds details for the case $k=0$, i.e., $C^0(M,N)$, but not for general $k\in\mathbb{N}$. The notes \cite{Blue} contain details regarding the proof of the smoothness of the transition maps; however, the question of whether the topology of $C^k(M,N)$ is the compact-open $C^k$ topology is not treated. \section{Preliminaries and the local $\Omega$-lemma} We begin by recalling some basic definitions regarding the notion of differentiability of maps between normed vector spaces that we use in the following, see e.g. \cite{MTAA}. Let $(X,\|.\|_X)$ and $(Y,\|.\|_Y)$ be normed vector spaces, $U\subset X$ open, and $f\colon U\rightarrow Y$ a map.
We say that $f$ is \textit{differentiable at} $x_0\in U$ if there exists a continuous linear map $Df(x_0):=Df_{x_0}\colon X\rightarrow Y$ s.t. for every $\varepsilon >0$, there exists $\delta=\delta(\varepsilon)>0$ s.t. whenever $0<\|x-x_0\|_X<\delta$, we have \[\frac{\|f(x)-f(x_0)-Df_{x_0}(x-x_0)\|_Y}{\|x-x_0\|_X}<\varepsilon.\] Moreover, the map $f$ is \textit{differentiable} if $f$ is differentiable at every $x_0\in U$. We say that $f$ is \textit{continuously differentiable} if $f$ is differentiable and the map \[Df\colon U\rightarrow L(X,Y),\qquad x\mapsto Df_x,\] is continuous. Here, $L(X,Y)$ denotes the space of continuous linear maps $X\to Y$. Similarly, $L^k(X,Y)$ denotes the space of $k$-multilinear continuous maps $\underbrace{X\times\ldots\times X}_{k\text{ times}}\to Y$. We endow $L^k(X,Y)$ with the norm \[\|f\|:=\sup\left\{\frac{ \|f(x_1,\ldots,x_k)\|_Y}{\|x_1\|_X\cdot\ldots\cdot \|x_k\|_X}\mid x_1,\ldots,x_k\in X\setminus\{0\}\right\}.\] Then $L^k(X,Y)$ is a Banach space if $Y$ is a Banach space. Finally, we denote by $L^k_s(X,Y)\subset L^k(X,Y)$ the symmetric elements of $L^k(X,Y)$. Inductively, we define \[D^kf:=D(D^{k-1}f)\colon U\rightarrow L^k(X,Y)\] if it exists, where we have identified $L(X,L^{k-1}(X,Y))$ with $L^k(X,Y)$ via the norm-preserving isomorphism \[L(X,L^{k-1}(X,Y))\ni f\mapsto \bigg((x_1,\ldots,x_k)\to f(x_1)(x_2,\ldots,x_k)\bigg)\in L^k(X,Y) .\] If $D^kf$ exists and is continuous, we say that $f$ is \textit{$k$ times continuously differentiable} (or \textit{$f$ is a $C^k$-map}). We use the notation $$C^k(U,Y):=\{f\colon U\to Y\text{ }| \text{ } f \text{ is } k \text{ times continuously differentiable}\}.$$ Note that if $f\in C^k(U,Y)$, then $D^kf(x)\in L^k_s(X,Y)$ for all $x\in U$. In the following the special case $X=\mathbb{R}^n$ will also be important. 
Then a map $f\colon U \to Y$ (where $U\subset\mathbb{R}^n$ is open and $(Y,\|.\|_Y)$ is a normed vector space) is continuously differentiable iff for all $j=1,\ldots,n$ and all $x_0\in U$ the limit \[\left(\partial_{x_j}f\right)(x_0):=\lim_{h\to 0}\frac{1}{h}\left(f(x_0+he_j)-f(x_0)\right)\] exists in $Y$ and the maps $\partial_{x_j}f\colon U\rightarrow Y$ are continuous. Let $k\in \mathbb{N}_{>0}$. Then $f$ is $k$ times continuously differentiable iff for all $j=1,\ldots,n$ the map $\partial_{x_j}f\colon U\rightarrow Y$ exists and is continuous for $k=1$, respectively $(k-1)$ times continuously differentiable for $k\ge 2$. We define \begin{align*} C^k(\overline{U},Y):=\{ f\in C^k(U,Y)\text{ }|\text{ } \partial^\alpha_xf \text{ has a continuous extension to } \overline{U} \text{ for all }|\alpha|\le k\}, \end{align*} where $\alpha=(\alpha_1,\ldots,\alpha_n)\in\mathbb{N}^n$ is a multiindex, $\partial^\alpha_x=\partial^{\alpha_1}_{x_1}\ldots \partial^{\alpha_n}_{x_n}$, and $|\alpha|=\alpha_1+\ldots+\alpha_n$. If $U\subset \mathbb{R}^n$ is open and bounded, we define \[\|f\|_{C^k(\overline{U},Y)}:=\max_{|\alpha|\le k}\sup_{x\in\overline{U}}\|\partial^\alpha_xf(x)\|_Y\] for all $f\in C^k(\overline{U},Y)$. If $(Y,\|.\|_Y)$ is a Banach space, then $(C^k(\overline{U},Y),\|.\|_{C^k(\overline{U},Y)})$ is a Banach space. The following technical lemma will be helpful to show e.g. that the maps that will later be the charts of $C^k(M,N)$ are homeomorphisms. \begin{lemma}\label{lemma Absch Ck norm verknuepfung} Let $U_1\subset\mathbb{R}^n$ and $W\subset\mathbb{R}^m$ be open. Let $K\subset U_1$ and $\tilde{K}\subset W$ be compact. Let $\Psi\colon W\rightarrow \mathbb{R}^l$ be a $C^k$-map and $R>0$. Let $f_1\in C^k(U_1,\mathbb{R}^m)$ with $f_1(K)\subset\tilde{K}$ and $f_1(U_1)\subset W$. Then there exists $C=C(\Psi, K,\tilde{K},R,f_1)>0$ s.t.
\begin{align*} &\max_{|\alpha|\le k}\sup_{x\in K}\|\partial^\alpha_x(\Psi\circ f_1)(x)-\partial^\alpha_x(\Psi\circ f_2)(x)\|\\ &\le C\max_{|\alpha|\le k}\sup_{x\in K}\|\partial^\alpha_xf_1(x)-\partial^\alpha_xf_2(x)\| \end{align*} for all $f_2\in C^k(U_2,\mathbb{R}^m)$ with $f_2(K)\subset\tilde{K}$, $f_2(U_2)\subset W$, and \begin{align}\label{eq qqq} \max_{|\alpha|\le k}\sup_{x\in K}\|\partial^\alpha_xf_1(x)-\partial^\alpha_x f_2(x)\|\le R. \end{align} Moreover, $C(\Psi, K,\tilde{K},R,f_1)$ can be chosen s.t. $R\mapsto C(\Psi, K,\tilde{K},R,f_1)$ is non-decreasing. \end{lemma} \begin{proof}[Sketch of proof.] The assertion of the lemma can be shown by mathematical induction over $k$, applying the chain rule, and adding zeros. We want to illustrate the idea in the case $k=n=m=l=1$ by the following exemplary calculation: \begin{align*} \partial_x (\Psi\circ f_1)-\partial_x (\Psi\circ f_2)&=(\partial_x\Psi)\circ f_1 \cdot \partial_xf_1-(\partial_x\Psi)\circ f_2 \cdot \partial_xf_2\\ &=(\partial_x \Psi)\circ f_1 \cdot \big(\partial_x f_1-\partial_x f_2\big) \\ &\hphantom{=}+ \big((\partial_x\Psi)\circ f_1 - (\partial_x\Psi)\circ f_2\big)\cdot \partial_x f_2. \end{align*} Now we can deal with the terms on the right hand side of the above equation by using the induction hypothesis and \eqref{eq qqq}. For higher differentiability orders and space dimensions, the calculations get more technical, but the idea stays the same. 
For example, in the case $k=2$ (and $n=m=l=1$) we apply the chain rule to get \[\partial_x^2(\Psi\circ f_i)= (\partial_x^2\Psi)\circ f_i \cdot (\partial_x f_i)^2 + (\partial_x\Psi)\circ f_i \cdot \partial_x^2f_i.\] Using this equation and adding zero, we have \begin{align*} \partial_x^2(\Psi\circ f_1)-\partial_x^2(\Psi\circ f_2)=&\big((\partial_x^2\Psi)\circ f_1-(\partial_x^2\Psi)\circ f_2 \big)\cdot (\partial_xf_2)^2\\ &+(\partial_x^2\Psi)\circ f_1 \cdot \big((\partial_x f_1)^2-(\partial_xf_2)^2\big)\\ &+\big((\partial_x\Psi)\circ f_1 -(\partial_x\Psi)\circ f_2 \big)\cdot \partial_x^2f_1\\ &+ (\partial_x\Psi)\circ f_2 \cdot \big(\partial_x^2f_1-\partial_x^2f_2\big). \end{align*} Again, we now use the induction hypothesis and \eqref{eq qqq}. (The term $(\partial_x f_1)^2-(\partial_xf_2)^2$ can be dealt with by the binomial formula and \eqref{eq qqq}.) \end{proof} The goal for the remainder of this section is to state and prove the so-called (local) $\Omega$-lemma. As stated in the introduction, this lemma is the key to show that $C^k(M,N)$ carries a smooth structure. To that end, we recall the following version of Taylor's theorem. Suppose that $X$ is a Banach space and that $U\subset X$ is an open convex subset. An open subset $\tilde{U}\subset X\times X$ is \textit{a thickening of $U$} if \begin{enumerate} \item $U\times \{0\}\subset \tilde{U}$, \item $u+th\in U$ for all $(u,h)\in \tilde{U}$ and $0\le t\le 1$, \item $(u,h)\in\tilde{U}$ implies $u\in U$. \end{enumerate} Note that there always exists a thickening of $U$. \begin{lemma}[Taylor's theorem]\label{theorem taylor} Let $X$ and $Y$ be Banach spaces, $U\subset X$ open and convex, $\tilde{U}$ a thickening of $U$. A map $f\colon U\rightarrow Y$ is $r$ times continuously differentiable if and only if there are continuous maps \[\varphi_i\colon U\rightarrow L^i_s(X,Y), \hspace{3em} i=1,\ldots r,\] and \[R\colon\tilde{U}\rightarrow L^r_s(X,Y),\] s.t. 
for all $(u,h)\in \tilde{U}$, \[f(u+h)=f(u)+\left(\sum_{i=1}^r\frac{\varphi_i(u)}{i!}h^i\right)+R(u,h)h^r\] where $h^i=(h,\ldots,h)$ ($i$ times) and $R(u,0)=0$. If $f$ is $r$ times continuously differentiable, then necessarily $\varphi_i=D^if$ for all $i=1,\ldots, r$ and in addition \[R(u,h)=\int_{0}^{1}\frac{(1-t)^{r-1}}{(r-1)!}\left(D^rf(u+th)-D^rf(u) \right)dt.\] \end{lemma} A proof can be found in e.g. \cite[2.4.15 Theorem]{MTAA}. \begin{lemma}[local $\Omega$-lemma]\label{lemma loc omega}Let $r,l\in\mathbb{N}$. Let $U\subset\mathbb{R}^n$ be open and bounded and let $V\subset\mathbb{R}^m$ be open, bounded, and convex. Moreover, let $Y$ be a Banach space and \[g\colon U\times V\rightarrow Y\] a map s.t. \begin{enumerate} \item $g\in C^r(\overline{U\times V},Y).$ \item For each $i\in\{0,\ldots,l\}$ the map \[D^i_2g\colon U\times V\rightarrow L^i_s(\mathbb{R}^m,Y),\] defined by $(D^i_2g)(x,y):=\left(D^i(g(x,.))\right)(y)$ for all $(x,y)\in U\times V$ exists and is an element of $C^r(\overline{U\times V},L^i_s(\mathbb{R}^m,Y)).$ \end{enumerate} Then the map \begin{align*} \Omega_g\colon C^r(\overline{U},V)&\rightarrow C^r(\overline{U},Y)\\ f&\mapsto (x\mapsto g(x,f(x))) \end{align*} is an element of $C^l(C^r(\overline{U},V),C^r(\overline{U},Y))$. Here, \[C^r(\overline{U},V):=\{f\in C^r(\overline{U},\mathbb{R}^m)\text{ }|\text{ } f(\overline{U})\subset V\}\] and $C^r(\overline{U},V) \subset C^r(\overline{U},\mathbb{R}^m)$ is open. Moreover, if $l>0$, it holds that \begin{align}\label{eq4} D^i\left(\Omega_g\right)=A_i\circ \Omega_{D^i_2g} \end{align} for each $i=1,\ldots,l$, where $A_i$ is the continuous map \[A_i\colon C^r(\overline{U},L^i_s(\mathbb{R}^m,Y))\rightarrow L^i_s(C^r(\overline{U},\mathbb{R}^m),C^r(\overline{U},Y))\] defined by \[\left(\left(A_i(H)\right)(h_1,\ldots,h_i)\right)(x):=(H(x))(h_1(x),\ldots,h_i(x)).\] \end{lemma} The statement of Lemma \ref{lemma loc omega} can be found in different versions in \cite{Blue,MTAA, Gl}.
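To see the lemma at work in a simple instance, take $n=m=1$, $Y=\mathbb{R}$, and $g(x,y)=e^y$; then i) and ii) hold for all $r,l\in\mathbb{N}$, $\Omega_g(f)=e^f$, and \eqref{eq4} gives \[\left(\left(D\Omega_g\right)(f)h\right)(x)=e^{f(x)}h(x)\] for all $f\in C^r(\overline{U},V)$, $h\in C^r(\overline{U},\mathbb{R})$, and $x\in\overline{U}$, i.e., the derivative of the composition operator is again given by pointwise composition and multiplication. Operators of the form $\Omega_g$ are known in the literature as Nemytskii (or superposition) operators.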
We want to humbly point out that it is possible that \cite[3.6 Theorem]{Blue} only holds in special cases. This theorem is tied to the assumptions of the version of the local $\Omega$-lemma \cite[3.7 Theorem]{Blue}. Therefore it is possible that the assumptions of the local $\Omega$-lemma in \cite{Blue} are not ideal. Our proof is an adapted version of \cite[Proof of 2.4.18 Proposition]{MTAA}. \begin{proof}[Proof of Lemma \ref{lemma loc omega}] First we prove that $C^r(\overline{U},V) \subset C^r(\overline{U},\mathbb{R}^m)$ is open. Choose $f_0\in C^r(\overline{U},V)$. Since $f_0(\overline{U})$ is compact, $\mathbb{R}^m\setminus V$ is closed, and $f_0(\overline{U})\cap(\mathbb{R}^m\setminus V)=\varnothing$, we have \[\varepsilon :=\textup{dist}(f_0(\overline{U}),\mathbb{R}^m\setminus V)>0.\] Now assume that $\|f-f_0\|_{C^r(\overline{U},\mathbb{R}^m)}<\varepsilon$. It follows that $\|f(x)-f_0(x)\|<\varepsilon$ for all $x\in\overline{U}$. By definition of $\varepsilon$, this means $f(\overline{U})\subset V$ and so $C^r(\overline{U},V) \subset C^r(\overline{U},\mathbb{R}^m)$ is open. In the case ``$l=0$, $r\in\mathbb{N}$'' the assertion of the lemma follows from a computation. Assume $l\in\mathbb{N}_{>0}$ and $r\in\mathbb{N}$. Let $\tilde{V}\subset \mathbb{R}^m\times\mathbb{R}^m$ be a thickening of $V$.
From applying Lemma \ref{theorem taylor} to $g(x,.)$ (for $x$ fixed) it follows that for all $(y_1,y_2)\in \tilde{V}$ and all $x\in U$ we have \begin{align}\label{eq3} g(x,y_1+y_2)=g(x,y_1)+\left(\sum_{i=1}^{l}\frac{1}{i!}(D^i_2g)(x,y_1)y_2^i\right)+R(x,y_1,y_2)y_2^l \end{align} where the map \[R\colon U\times\tilde{V} \rightarrow L^l_s(\mathbb{R}^m,Y)\] is given by \[R(x,y_1,y_2)=\int_{0}^{1}\frac{(1-t)^{l-1}}{(l-1)!}\left(D^l_2g(x,y_1+ty_2)-D^l_2g(x,y_1) \right)dt.\] Define \[F(t,x,y_1,y_2):=\frac{(1-t)^{l-1}}{(l-1)!}\left(D^l_2g(x,y_1+ty_2)-D^l_2g(x,y_1)\right).\] From ii) it follows that \[F\in C^r(\overline{(0,1)\times U\times\tilde{V}}, L^l_s(\mathbb{R}^m,Y)).\] By differentiating under the integral it follows that \[R\in C^r(\overline{U\times\tilde{V}},L^l_s(\mathbb{R}^m,Y)).\] Since we already proved the case ``$l=0$, $r\in\mathbb{N}$'' we see that \begin{align*} \Omega_R\colon C^r(\overline{U},\tilde{V})&\rightarrow C^r(\overline{U},L^l_s(\mathbb{R}^m,Y)),\\ h&\mapsto (x\mapsto R(x,h(x))), \end{align*} is continuous. In particular, \[\tilde{R}:=A_l\circ\Omega_R\colon C^r(\overline{U},\tilde{V})\rightarrow L^l_s(C^r(\overline{U},\mathbb{R}^m),C^r(\overline{U},Y))\] is continuous. Analogously, we see that \[\widetilde{\Omega_{D^i_2g}}:=A_i\circ\Omega_{D^i_2g}\colon C^r(\overline{U},V)\rightarrow L^i_s(C^r(\overline{U},\mathbb{R}^m),C^r(\overline{U},Y))\] is continuous for $i=1,\ldots,l$. From $\eqref{eq3}$ it follows that for all $(f,h)\in C^r(\overline{U},\tilde{V})$ we have \[\Omega_g(f+h)=\Omega_g(f)+\left(\sum_{i=1}^{l} \frac{1}{i!}\widetilde{\Omega_{D^i_2g}}(f)h^i\right)+\tilde{R}(f,h)h^l.\] From Lemma \ref{theorem taylor} we conclude that $\Omega_g\in C^l(C^r(\overline{U},V),C^r(\overline{U},Y))$ and \[D^i\left(\Omega_g\right)=\widetilde{\Omega_{D^i_2g}}=A_i\circ \Omega_{D^i_2g}\] for $i=1,\ldots,l$. 
(Here we used that $C^r(\overline{U},\tilde{V})$, viewed as a subset of $C^r(\overline{U},\mathbb{R}^m)\times C^r(\overline{U},\mathbb{R}^m)$, is a thickening of $C^r(\overline{U},V)$.) \end{proof} \section{The topological space $C^k(M,N)$} In this section we recall the definitions of the compact-open $C^k$ topology on $C^k(M,N)$ and the $C^k$-norm on sections of vector bundles. We try to be as precise as possible when stating these definitions, so that no confusion arises when we use them later in technical proofs. Then we show that the maps which will be the charts of $C^k(M,N)$ are homeomorphisms. The following definition is taken from \cite[Chapter 2]{Hir}. \begin{definition}[compact-open $C^k$ topology] Let $M$ and $N$ be manifolds without boundary and $k\in\mathbb{N}$. For $f\in C^k(M,N)$, charts $(\varphi,U)$ and $(\psi,V)$ of $M$ and $N$, respectively, $K\subset U$ compact with $f(K)\subset V$ and $\varepsilon >0$ we define the set \begin{align*} \mathcal{N}^k(f,\varphi,U,\psi&,V,K,\varepsilon):=\{g\in C^k(M,N)\text{ }|\text{ }g(K)\subset V\text{, }\\ &\max_{|\alpha|\le k}\sup_{x\in\varphi(K)}\|\partial^\alpha_x(\psi\circ g\circ \varphi^{-1})(x)-\partial^\alpha_x(\psi\circ f\circ \varphi^{-1})(x)\|< \varepsilon\} \end{align*} where $\|.\|$ denotes the Euclidean norm. The \textit{compact-open $C^k$ topology} (or \textit{weak topology}) \textit{on $C^k(M,N)$} is the topology generated by the set \begin{align*} \{\mathcal{N}^k(f,\varphi,U,\psi,V,K,\varepsilon)\text{ }|\text{ }f\in C^k(M,N),\text{ } (\varphi,U) \text{ and }(\psi,V) \text{ charts of } M \text{ and } N,\\ \text{ respectively, } K\subset U \text{ compact with } f(K)\subset V \text{, } \varepsilon >0\} \end{align*} as a subbasis. \end{definition} From now on, we always assume $C^k(M,N)$ to be equipped with the compact-open $C^k$ topology. The topological space $C^k(M,N)$ is second-countable and metrizable \cite[p. 35]{Hir}. In particular, it is Hausdorff. We will use the following lemma later.
\begin{lemma}\label{lemma weak top properties} Assume $M$ is closed. Let $f\in C^k(M,N)$, $k\in\mathbb{N}$, $(\varphi_i,U_i)$ and $(\psi_i,V_i)$ charts of $M$ and $N$ respectively, $K_i\subset U_i$ compact with $f(K_i)\subset V_i$, $i=1,\ldots r$, and $\bigcup_{i=1}^rK_i=M$. Then the set \begin{align*} \{\bigcap_{i=1}^r\mathcal{N}^k(f,\varphi_i,U_i,\psi_i,V_i,K_i,\varepsilon)\text{ }|\text{ }\varepsilon >0\} \end{align*} is a neighborhood basis of $f$. In particular, a sequence $(f_m)_{m\in\mathbb{N}}\subset C^k(M,N)$ converges to $f$ in $C^k(M,N)$ iff for all $\varepsilon>0$ there exists some $N=N(\varepsilon)$ s.t. for all $m\ge N(\varepsilon)$ it holds that $f_m\in \bigcap_{i=1}^r\mathcal{N}^k(f,\varphi_i,U_i,\psi_i,V_i,K_i,\varepsilon)$. \end{lemma} \begin{proof} We have to show the following: If an arbitrary $\mathcal{N}^k(f,\varphi,U,\psi,V,K,\varepsilon)$ is given, then there exists some $\delta>0$ s.t. \[\bigcap_{i=1}^r\mathcal{N}^k(f,\varphi_i,U_i,\psi_i,V_i,K_i,\delta)\subset\mathcal{N}^k(f,\varphi,U,\psi,V,K,\varepsilon).\] To that end, assume that $K_i\cap K\neq\varnothing$. Since the complement $\psi_i(V_i\cap V)^\complement$ is closed, $\psi_i(f(K_i\cap K))$ is compact, and $\psi_i(V_i\cap V)^\complement\cap\psi_i(f(K_i\cap K))=\varnothing$ we have \[\delta_i:=\mathrm{dist}(\psi_i(V_i\cap V)^\complement,\psi_i(f(K_i\cap K)))>0.\] Now choose an arbitrary $\delta$ with \[0<\delta\le\frac{1}{2}\min\{\delta_i\text{ }|\text{ } i\in\{1,\ldots,r\} \text{ and } K_i\cap K\neq \varnothing\}\] and let \[g\in\bigcap_{i=1}^r\mathcal{N}^k(f,\varphi_i,U_i,\psi_i,V_i,K_i,\delta).\] We show $g(K)\subset V$. Since $g(K_i)\subset V_i$ and because the $K_i$ cover $M$, it is sufficient to show $g(K_i\cap K)\subset V_i\cap V$ whenever $K_i\cap K\neq \varnothing$. To that end, assume $K_i\cap K\neq \varnothing$. 
From $g\in \mathcal{N}^k(f,\varphi_i,U_i,\psi_i,V_i,K_i,\delta)$ it follows that \[\max_{|\alpha|\le k}\sup_{x\in\varphi_i(K_i\cap K)}\|\partial^\alpha_x(\psi_i\circ g\circ \varphi_i^{-1})(x)-\partial^\alpha_x(\psi_i\circ f \circ \varphi_i^{-1})(x)\|<\delta.\] In particular, that means that for each $p\in K_i\cap K$ we have $\psi_i(g(p))\in B_{\delta}(\psi_i(f(p)))$. From the definition of $\delta$ it follows that for all $p\in K_i\cap K$ we have $B_{\delta}(\psi_i(f(p)))\subset\psi_i(V_i\cap V)$. It follows that $\psi_i(g(K_i\cap K))\subset \psi_i(V_i\cap V)$ and thus $g(K_i\cap K)\subset V_i\cap V$. We have shown $g(K)\subset V$. Using Lemma \ref{lemma Absch Ck norm verknuepfung}\footnote{For $f_1=\psi_i\circ f \circ \varphi_i^{-1}$ defined on $\varphi_i(U_i\cap U\cap f^{-1}(V_i\cap V))$, $f_2=\psi_i\circ g\circ\varphi_i^{-1}$ defined on $\varphi_i(U_i\cap U\cap g^{-1}(V_i\cap V))$, $\Psi=\psi\circ \psi_i^{-1}$ defined on $\psi_i(V_i\cap V)$, and $\tilde{K}=\overline{ B_\delta(\psi_i(f(K_i\cap K)))}\subset \psi_i(V_i\cap V)$.} (and a version of Lemma \ref{lemma Absch Ck norm verknuepfung} that estimates pre-composition with diffeomorphisms rather than post-composition with maps, for details see \cite[Lemma 3.2.1 i)]{JWDissertation}) we calculate \begin{align*} &\max_{|\alpha|\le k}\sup_{x\in\varphi(K)}\|\partial^\alpha_x(\psi\circ g\circ \varphi^{-1})(x)-\partial^\alpha_x(\psi\circ f\circ \varphi^{-1})(x)\|\\ &=\max_{i=1,\ldots,r}\max_{|\alpha|\le k}\sup_{x\in\varphi(K_i\cap K)}\|\partial^\alpha_x(\psi\circ g\circ \varphi^{-1})(x)-\partial^\alpha_x(\psi\circ f\circ \varphi^{-1})(x)\|\\ &=\max_{i=1,\ldots,r}\max_{|\alpha|\le k}\sup_{x\in\varphi(K_i\cap K)}\|\partial^\alpha_x(\psi\circ \psi_i^{-1}\circ \psi_i\circ g\circ \varphi_i^{-1}\circ\varphi_i\circ \varphi^{-1})(x)\\ &\hspace{3cm}-\partial^\alpha_x(\psi\circ\psi_i^{-1}\circ \psi_i\circ f\circ\varphi_i^{-1}\circ\varphi_i\circ \varphi^{-1})(x)\|\\ &\le\max_{i=1,\ldots,r}\left(C_i\max_{|\alpha|\le k}\sup_{x\in\varphi_i(K_i\cap K)}\|\partial^\alpha_x(\psi_i\circ g\circ \varphi_i^{-1})(x)-\partial^\alpha_x(\psi_i\circ f\circ \varphi_i^{-1})(x)\|\right)\\ &\le \left(\max_{i=1,\ldots,r}C_i\right)\delta. \end{align*} Additionally, we choose $\delta$ so small that $\left(\max_{i=1,\ldots,r}C_i\right)\delta<\varepsilon$. This finishes the proof. \end{proof} \begin{definition}[$C^k$-norm on sections of a vector bundle]\label{def C^k norm vrb} Let $M$ be a closed manifold. Let $\pi\colon E\to M$ be a $C^k$ vector bundle. Pick charts $(U_i,\varphi_i)$ of $M$, $i=1,\ldots,l$, $\bigcup_{i=1}^lU_i=M$ s.t. $\overline{U_i}\subset M$ is compact, $\overline{U_i}\subset \tilde{U_i}$, $(\tilde{U_i},\varphi_i)$ is still a chart of $M$ and there are local trivializations $(\hat{U}_i,\Phi_i)$ of $E$ with $\overline{U_i}\subset\hat{U}_i$ for each $i=1,\ldots,l$. For $k\in\mathbb{N}$ let \[\Gamma_{C^k}(E):=\{s\colon M \to E\text{ }|\text{ } s\in C^k(M,E) \text{ and } \pi\circ s=id_M \}\] be the space of $C^k$-sections of $E$. Define the \textit{$C^k$-norm on $\Gamma_{C^k}(E)$} by \[\|s\|_{C^k}:=\|s\|_{\Gamma_{C^k}(E)}:=\max_{i=1,\ldots, l}\max_{|\alpha|\le k} \sup_{x\in\overline{\varphi_i(U_i)}}\|\partial^\alpha_x(pr_2\circ\Phi_i\circ s\circ \varphi_i^{-1})(x)\|\] for $s\in\Gamma_{C^k}(E)$. Note that $(\Gamma_{C^k}(E),\|.\|_{C^k})$ is a Banach space. Up to equivalence of norms, $\|.\|_{C^k}$ does not depend on the choices made in its definition. \end{definition} For the definition of the charts of $C^k(M,N)$ the exponential map of $N$ is the main input. For the convenience of the reader and to fix notation we recall some basic facts about the exponential map of a Riemannian manifold. In the following we denote the bundle projection of $TN$ by $\pi_{TN}\colon TN\to N$.
Define $\mathcal{E}\subset TN$ by \[\mathcal{E}:=\{v\in TN\text{ }|\text{ } \textup{exp}_{\pi_{TN}(v)}v \text{ exists}\}.\] \begin{enumerate} \item $\mathcal{E}\subset TN$ is open and \[\textup{exp}\colon \mathcal{E}\rightarrow N\] defined by $\textup{exp}(v):=\textup{exp}_{\pi_{TN}(v)}v$ is smooth. \item Define the smooth map \[E:=(\pi_{TN},\textup{exp})\colon \mathcal{E}\rightarrow N\times N\] by $E(v):=(\pi_{TN}(v),\textup{exp}_{\pi_{TN}(v)}v)$. For each $p\in N$ there exists a neighborhood $W$ of $0_p$ (where $0_p$ denotes the zero-element in $T_pN$) in $TN$ s.t. the map \[E\colon W\rightarrow E(W)\] is a diffeomorphism (in particular $E(W)$ is open in $N\times N$). \item For all $p\in N$ and $0<\delta<\textup{inj}_p(N)$ where $\textup{inj}_p(N)>0$ is the injectivity radius of $N$ at $p$ it holds that \[\textup{exp}_p\colon B_{\delta}(0_p)\rightarrow B_\delta(p)\] is a diffeomorphism where $B_{\delta}(0_p)=\{v\in T_pN\text{ }|\text{ }\|v\|_h<\delta \}$, $B_\delta(p)=\{q\in N\text{ }|\text{ }d(p,q)<\delta\}$, and $d$ is the distance function induced by $h$. \end{enumerate} \end{lemma} Now we define the maps that will later be the charts of $C^k(M,N)$ and show that they are homeomorphisms. \begin{lemma}\label{lemma charts c^kMN are homeo}Let $k\in\mathbb{N}$. Let $M$ and $N$ be manifolds without boundary. Let $M$ be compact and let $N$ be connected. Choose a Riemannian metric $h$ on $N$. Define \[U_{f,\varepsilon}:=\bigcap_{i=1}^l\mathcal{N}^k(f,\varphi_i,\tilde{U}_i,\psi_i,V_i,\overline{U_i},\varepsilon)\] for $(U_i,\varphi_i)$ charts of $M$, $i=1,\ldots,l$, $\bigcup_{i=1}^lU_i=M$, s.t. $\overline{U_i}\subset M$ is compact, $\overline{U_i}\subset\tilde{U}_i$, $(\tilde{U}_i,\varphi_i)$ is still a chart of $M$ and charts $(V_i,\psi_i)$ of $N$ with $f(\overline{U_i})\subset V_i$ for each $i=1,\ldots,l$, $\varepsilon>0$.
Define the map \[\varphi_f\colon U_{f,\varepsilon}\rightarrow \varphi_f(U_{f,\varepsilon})\subset \Gamma_{C^k}(f^*TN)\] by \[(\varphi_f(g))(p):=(\textup{exp}_{f(p)})^{-1}g(p)\] for all $p\in M$, where $\textup{exp}$ is the exponential map of $(N,h)$. Then it holds that \begin{enumerate} \item[i)] For every $\delta>0$ there exists $\varepsilon >0$ s.t. for all $g\in U_{f,\varepsilon}$ and all $p\in M$ we have \begin{align*} d(g(p),f(p))<\delta. \end{align*} In particular, $\varphi_f$ is well-defined on $U_{f,\varepsilon(\delta)}$ for $\delta< \inf_{p\in M}\textup{inj}_{f(p)}(N)$. \end{enumerate} Moreover, for $\varepsilon>0$ small enough the following is true: \begin{enumerate} \item[ii)] $\varphi_f\colon U_{f,\varepsilon}\rightarrow \varphi_f(U_{f,\varepsilon})$ is continuous (where on $U_{f,\varepsilon}$ we have the subspace topology induced from the compact-open $C^k$ topology and on $\varphi_f(U_{f,\varepsilon})$ we have the subspace topology induced from the $C^k$-norm on $\Gamma_{C^k}(f^*TN)$). \item[iii)] $\varphi_f(U_{f,\varepsilon})\subset \Gamma_{C^k}(f^*TN)$ is open. \item[iv)] $\varphi_f^{-1}\colon \varphi_f(U_{f,\varepsilon})\rightarrow U_{f,\varepsilon}$ is continuous. \end{enumerate} \end{lemma} \begin{proof} We start by mentioning that since $C^k(M,N)$ and $\Gamma_{C^k}(f^*TN)$ are first-countable, it is sufficient to show that $\varphi_f$ and $\varphi_f^{-1}$ are sequentially continuous. To make the proofs of i) and ii) easier, we first choose the $V_i$ s.t. \[(A)\left\{ \begin{array}{l} \psi_i(V_i) \text{ is convex and compact, }\overline{V_i}\subset\tilde{V}_i \text{ where } (\tilde{V}_i,\psi_i) \text{ is still a chart of }N\text{,}\\\tilde{V}_i\times \tilde{V}_i\subset E(W_i), \text{where }W_i\subset TN\text{ and}\\ E(W_i)\subset N\times N\text{ are open s.t.
} \\E\colon W_i\rightarrow E(W_i) \text{ is a diffeomorphism},\\ (\tilde{V}_i, \hat{\Phi}_i) \text{ are local trivializations of } TN \text{ with induced local}\\\text{trivialization } (f^{-1}(\tilde{V_i}),\Phi_i) \text{ of } f^*TN \text{ for each } i=1,\ldots, l. \end{array} \right.\] (See Lemma \ref{lemma eig exp} ii).) In the following we prove i) and ii) with the additional assumption $(A)$ and then show afterwards that we don't need it, provided that $\varepsilon>0$ is small enough. \textbf{Proof of i):} It is not difficult to see that for every $\delta>0$ there exists $\varepsilon>0$ s.t. for all $g\in U_{f,\varepsilon}$ and all $p\in M$ we have \[d(g(p),f(p))<\delta.\] Choosing $\delta<\inf_{p\in M}\textup{inj}_{f(p)}(N)$ we have that $(\textup{exp}_{f(p)})^{-1}g(p)$ exists for each $p\in M$ and all $g\in U_{f,\varepsilon(\delta)}$. Moreover, $\varphi_f(g)\in\Gamma_{C^k}(f^*TN)$, since on $U_i$ it holds that $\varphi_f(g)=(E|_{W_i})^{-1}\circ(f,g)$. We have shown that $\varphi_f$ is a well-defined map. \textbf{Proof of ii):} Choose $\varepsilon$ so small that $\varphi_f$ is well-defined on $U_{f,\varepsilon}$, see i). Let $(g_m)_{m\in \mathbb{N}}$ be a sequence in $U_{f,\varepsilon}$, $g\in U_{f,\varepsilon}$ with $g_m\xrightarrow{m\to\infty}g$ in $U_{f,\varepsilon}$. In particular, for each $r>0$ there exists $m_0=m_0(r)\in\mathbb{N}$ s.t. \[g_m\in\bigcap_{i=1}^l\mathcal{N}^k(g,\varphi_i,\tilde{U}_i,\psi_i,V_i,\overline{U_i},r)\] for all $m\ge m_0$. (We note that the $\varphi_i,\tilde{U}_i,\psi_i,V_i,U_i$ are the same as in the statement of the lemma where we additionally assume $(A)$ as mentioned above.) That means that for all $i=1,\ldots,l$ we have \[\|\psi_i\circ g_m\circ \varphi_i^{-1}-\psi_i\circ g\circ \varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\xrightarrow{m\to\infty}0\] where $n=\textup{dim}(N)$.
Using Lemma \ref{lemma Absch Ck norm verknuepfung} \footnote{For $f_1=(\psi_i\times \psi_i)\circ (f,g)\circ \varphi_i^{-1}$ defined on $\varphi_i(\tilde{U}_i\cap f^{-1}(\tilde{V}_i)\cap g^{-1}(\tilde{V}_i))$, $f_2=(\psi_i\times \psi_i)\circ (f,g_m)\circ \varphi_i^{-1}$ defined on $\varphi_i(\tilde{U}_i\cap f^{-1}(\tilde{V}_i)\cap g_m^{-1}(\tilde{V}_i))$, and for $\Psi=pr_2\circ\hat{\Phi}_i\circ E|_{W_i}^{-1}\circ(\psi_i^{-1}\times\psi_i^{-1})$, defined on $\psi_i(\tilde{V}_i)\times\psi_i(\tilde{V}_i)$, $K=\overline{\varphi_i(U_i)}$, and $\tilde{K}=\psi_i(\overline{V_i})\times\psi_i(\overline{V_i})$.} we calculate for each $i=1,\ldots,l$ \begin{align*} &\|pr_2\circ\Phi_i\circ\varphi_f(g_m)\circ\varphi_i^{-1}-pr_2\circ\Phi_i\circ\varphi_f(g)\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\\ &=\|pr_2\circ\hat{\Phi}_i\circ E|_{W_i}^{-1}\circ(f,g_m)\circ\varphi_i^{-1}\\ &\hphantom{=}-pr_2\circ\hat{\Phi}_i\circ E|_{W_i}^{-1}\circ(f,g)\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\\ &=\|\left(pr_2\circ\hat{\Phi}_i\circ E|_{W_i}^{-1}\circ(\psi_i^{-1}\times\psi_i^{-1})\right)\circ\left((\psi_i\times\psi_i)\circ(f,g_m)\circ\varphi_i^{-1}\right)\\ &\hphantom{=}-\left(pr_2\circ\hat{\Phi}_i\circ E|_{W_i}^{-1}\circ(\psi_i^{-1}\times\psi_i^{-1})\right)\circ\left((\psi_i\times\psi_i)\circ(f,g)\circ\varphi_i^{-1}\right)\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\\ &\le C_i\|(\psi_i\times\psi_i)\circ(f,g_m)\circ\varphi_i^{-1}-(\psi_i\times\psi_i)\circ(f,g)\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n\times\mathbb{R}^n)}\\ &=C_i\|\psi_i\circ g_m\circ\varphi_i^{-1}-\psi_i\circ g\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\xrightarrow{m\to\infty}0, \end{align*} i.e., \[\|\varphi_f(g_m)-\varphi_f(g)\|_{C^k}\xrightarrow{m\to\infty}0.\] Hence, $\varphi_f\colon U_{f,\varepsilon}\rightarrow \varphi_f(U_{f,\varepsilon})$ is continuous. We have shown i) and ii) under the additional assumption $(A)$.
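As a simple illustration of the chart map (a side remark of ours, not needed for the argument): in the flat case $N=\mathbb{R}^n$ with the Euclidean metric we have $\textup{exp}_q(v)=q+v$ for all $q\in\mathbb{R}^n$ and $v\in T_q\mathbb{R}^n\cong\mathbb{R}^n$, so \begin{align*} (\varphi_f(g))(p)=(\textup{exp}_{f(p)})^{-1}g(p)=g(p)-f(p)\quad\text{for all }p\in M, \end{align*} i.e., $\varphi_f$ is the translation $g\mapsto g-f$, and the continuity just shown reduces to the continuity of this translation with respect to the $C^k$-norm.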
Now we show that we don't need the assumption $(A)$, provided that $\varepsilon>0$ is small enough. To that end, choose $(U_i',\varphi_i')$ charts of $M$, $i=1,\ldots,m$, $\bigcup_{i=1}^mU_i'=M$, s.t. $\overline{U_i'}\subset M$ is compact, $\overline{U_i'}\subset\tilde{U}_i'$, $(\tilde{U}_i',\varphi_i')$ is still a chart of $M$ and charts $(V_i',\psi_i')$ of $N$ with $f(\overline{U_i'})\subset V_i'$ for each $i=1,\ldots,m$. Using Lemma \ref{lemma weak top properties} we choose $\varepsilon'>0$ s.t. \[\bigcap_{i=1}^m\mathcal{N}^k(f,\varphi_i',\tilde{U}_i',\psi_i',V_i',\overline{U_i}',\varepsilon')\subset \bigcap_{i=1}^l\mathcal{N}^k(f,\varphi_i,\tilde{U}_i,\psi_i,V_i,\overline{U_i},\varepsilon)\] where the $\varphi_i,\tilde{U}_i,\psi_i,V_i,U_i$ are the same as in the statement of the lemma and satisfy $(A)$. Since $\varphi_f$ is well-defined and continuous on the set $\bigcap_{i=1}^l\mathcal{N}^k(f,\varphi_i,\tilde{U}_i,\psi_i,V_i,\overline{U_i},\varepsilon)$ (that is what we have shown above) it is obviously well-defined and continuous on the subset $\bigcap_{i=1}^m\mathcal{N}^k(f,\varphi_i',\tilde{U}_i',\psi_i',V_i',\overline{U_i}',\varepsilon')$. \textbf{Proof of iii) and iv):} Choose $\varepsilon$ so small that $\varphi_f$ is well-defined on $U_{f,\varepsilon}$ and ii) holds. From Lemma \ref{lemma eig exp} iii) we see that $\varphi_f(U_{f,\varepsilon})\subset U:=\{s\in\Gamma_{C^k}(f^*TN)\text{ }|\text{ }\|s(p)\|_h<\delta\text{ for all }p\in M\}$. First we prove that $U$ is open in $\Gamma_{C^k}(f^*TN)$. To that end, let $s_0\in U$. Since the function $M\rightarrow \mathbb{R}$, $p\mapsto \|s_0(p)\|_h$, is continuous and $M$ is compact, we have $\delta_0:=\max_{p\in M}\|s_0(p)\|_h<\delta$. Comparing $h$ to the Euclidean norm in the trivialization it is easy to verify that there exists $C>0$ s.t. \[\|s(p)-s_0(p)\|_h\le C\|s-s_0\|_{C^k}\] for all $s\in\Gamma_{C^k}(f^*TN)$ and all $p\in M$. Choose $r>0$ s.t. $Cr<\delta-\delta_0$. 
If $\|s-s_0\|_{C^k}<r$, then \[\|s(p)\|_h\le \|s(p)-s_0(p)\|_h+\|s_0(p)\|_h\le C\|s-s_0\|_{C^k} +\delta_0 < Cr+\delta_0<\delta\] for all $p\in M$, therefore $U$ is open in $\Gamma_{C^k}(f^*TN)$. Next we show that the well-defined map \[H\colon U\rightarrow C^k(M,N),\] $(H(s))(p):=\textup{exp}_{f(p)}s(p)$ is continuous. Then we have in particular that $\varphi_f^{-1}=H|_{\varphi_f(U_{f,\varepsilon})}$ is continuous and that $\varphi_f(U_{f,\varepsilon})=H^{-1}(U_{f,\varepsilon})$ is open in $U$ (and therefore also in $\Gamma_{C^k}(f^*TN)$). To show continuity of $H$, choose charts $(U_i,\varphi_i)$ of $M$, $i=1,\ldots,l$, $\bigcup_{i=1}^lU_i=M$ s.t. $\overline{U_i}\subset M$ is compact, $\overline{U_i}\subset \tilde{U_i}$, $(\tilde{U_i},\varphi_i)$ is still a chart of $M$ and there are local trivializations $(\tilde{U}_i,\Phi_i)$ of $f^*TN$ and charts $(V_i,\psi_i)$ of $N$ with $f(\overline{U_i})\subset V_i$ and $(B_{\delta}(V_i),\psi_i)$ is still a chart of $N$ for each $i=1,\ldots,l$, where $B_{\delta}(V_i)=\{p\in N\text{ }|\text{ } \exists q\in V_i:\text{ }d(p,q)<\delta\}$. (Note that the $\varphi_i,\tilde{U}_i,\psi_i,V_i,U_i$ here don't need to be the same as in the statement of the lemma.) Let $(s_m)_{m\in \mathbb{N}}$ be a sequence in $U$, $s\in U$, with \[\|s_m-s\|_{C^k}\xrightarrow{m\to\infty}0,\] i.e., \begin{align*} \|pr_2\circ \Phi_i\circ s_m\circ\varphi_i^{-1}-pr_2\circ\Phi_i\circ s\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\xrightarrow{m\to\infty}0 \end{align*} for each $i=1,\ldots,l$. For showing $H(s_m)\xrightarrow{m\to\infty}H(s)$ in $C^k(M,N)$ it is sufficient to show that for all $r >0$ there exists $m_0=m_0(r)\in\mathbb{N}$ s.t. \[H(s_m)\in\bigcap_{i=1}^l\mathcal{N}^k(H(s),\varphi_i,\tilde{U}_i,\psi_i,B_\delta(V_i),\overline{U_i},r)\] for all $m\ge m_0$, see Lemma \ref{lemma weak top properties}.
First of all, by definition of $H$ and Lemma \ref{lemma eig exp} iii) it holds that \[d(H(s_m)(p),f(p))<\delta \text{ for all }m\in\mathbb{N}\text{ and }d(H(s)(p),f(p))<\delta\] for each $p\in M$. Since $f(\overline{U_i})\subset V_i$ it follows that $H(s_m)(\overline{U_i})\subset B_\delta(V_i)$ and $H(s)(\overline{U_i})\subset B_\delta(V_i)$ for each $m\in\mathbb{N}$ and $i=1,\ldots,l$. Let $r >0$. Using Lemma \ref{lemma Absch Ck norm verknuepfung} \footnote{For $f_1=(\varphi_i\times id)\circ \Phi_i\circ s\circ \varphi_i^{-1}$ defined on $\varphi_i(\tilde{U}_i\cap f^{-1}(B_\delta(V_i)))$, $f_2=(\varphi_i\times id)\circ \Phi_i\circ s_m\circ \varphi_i^{-1}$ also defined on $\varphi_i(\tilde{U}_i\cap f^{-1}(B_\delta(V_i)))$, $\Psi=\psi_i\circ f^*\textup{exp}\circ \Phi_i^{-1}\circ (\varphi_i^{-1}\times id)$ defined on $(\varphi_i\times id)\circ \Phi_i\left(\{v\in f^*TN\text{ }|\text{ }\|v\|<\delta \}\cap f^*TN|_{\tilde{U}_i\cap f^{-1}(B_\delta(V_i))}\right)$, $K=\overline{\varphi_i(U_i)}$, and $\tilde{K}=(\varphi_i\times id)\circ \Phi_i\left(\{v\in f^*TN\text{ }|\text{ }\|v\|\le\delta \}\cap f^*TN|_{\overline{U}_i\cap f^{-1}(\overline{V_i})}\right)$.} we calculate for each $i=1,\ldots,l$ and $m$ large enough \begin{align*} &\|\psi_i\circ H(s_m)\circ\varphi_i^{-1}-\psi_i\circ H(s)\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\\ &=\|\psi_i\circ f^*\textup{exp}\circ s_m\circ\varphi_i^{-1}-\psi_i\circ f^*\textup{exp}\circ s\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\\ &=\|\left(\psi_i\circ f^*\textup{exp}\circ\Phi_i^{-1}\circ(\varphi_i^{-1}\times id)\right)\circ\left((\varphi_i\times id)\circ\Phi_i\circ s_m\circ\varphi_i^{-1}\right)\\ &-\left(\psi_i\circ f^*\textup{exp}\circ\Phi_i^{-1}\circ(\varphi_i^{-1}\times id)\right)\circ\left((\varphi_i\times id)\circ\Phi_i\circ s\circ\varphi_i^{-1}\right)\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\\ &\le C_i\|(\varphi_i\times id)\circ\Phi_i\circ
s_m\circ\varphi_i^{-1}-(\varphi_i\times id)\circ\Phi_i\circ s\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^m\times\mathbb{R}^n)}\\ &=C_i\|pr_2\circ\Phi_i\circ s_m\circ\varphi_i^{-1}-pr_2\circ\Phi_i\circ s\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}\\ &<r, \end{align*} where $m=\textup{dim}(M)$ and $(f^*\textup{exp})(v):=\textup{exp}_{f(p)}v$ for $v\in(f^*TN)_p$, $p\in M$. We have shown \[H(s_m)\in\bigcap_{i=1}^l\mathcal{N}^k(H(s),\varphi_i,\tilde{U}_i,\psi_i,B_\delta(V_i),\overline{U_i},r)\] for $m$ large enough, so $H\colon U\rightarrow C^k(M,N)$ is continuous. \end{proof} \section{The smooth structure on $C^k(M,N)$} In the following we ``globalize'' the local $\Omega$-lemma (Lemma \ref{lemma loc omega}) to sections of vector bundles. This will be the main input for showing that $C^k(M,N)$ carries a \textit{smooth} structure. We start with a proposition that provides a criterion for a map with target $\Gamma_{C^k}(E)$ to be a $C^r$-map. \begin{proposition}\label{proposition 1} In the situation of Definition \ref{def C^k norm vrb}, we define \[R_i\colon\Gamma_{C^k}(E)\rightarrow C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)\] by $R_i(s):=pr_2\circ\Phi_i\circ s\circ\varphi_i^{-1}$ for $i=1,\ldots,l$, where we assume that $\textup{rank}(E)=n$. Let $r\in\mathbb{N}$, $X$ a Banach space, $U\subset X$ open, and $$F\colon U\rightarrow \Gamma_{C^k}(E)$$ a map. Then $F\in C^r(U,\Gamma_{C^k}(E))$ if and only if $R_i\circ F\in C^r (U,C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n))$ for $i=1,\ldots,l$.
\end{proposition} \begin{proof}[Sketch of proof]``$\Rightarrow$:'' The $R_i$ are linear and continuous, so they are smooth.\\ ``$\Leftarrow$:'' To make things easier, we first get rid of the $\Phi_i$ and $\varphi_i$ in $R_i\circ F$ as follows: On the vector space \[\Gamma_{C^k,\overline{U}_i}(E):=\{s\colon U_i\rightarrow E\text{ }|\text{ } s\in\Gamma_{C^k}(E|_{U_i})\text{ and } pr_2\circ\Phi_i\circ s\circ \varphi_i^{-1}\in C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)\}\] we define the norm \[\|s\|_i:=\|pr_2\circ\Phi_i\circ s\circ\varphi_i^{-1}\|_{C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)}.\] We get isomorphisms of Banach spaces \begin{align*} J_i\colon \Gamma_{C^k,\overline{U}_i}(E)&\rightarrow C^k(\overline{\varphi_i(U_i)},\mathbb{R}^n)\\ s&\mapsto pr_2\circ\Phi_i\circ s\circ\varphi_i^{-1}, \end{align*} with $J_i^{-1}(f)=\Phi_i^{-1}(id_{U_i},f\circ\varphi_i)$. By assumption, we have that \begin{align*} F_i:=J_i^{-1}\circ R_i\circ F\colon U&\rightarrow \Gamma_{C^k,\overline{U}_i}(E),\\ x&\mapsto F(x)|_{U_i} \end{align*} is an element of $C^r(U,\Gamma_{C^k,\overline{U}_i}(E))$ for $i=1,\ldots,l$. Define \[\tilde{D}^jF\colon U\rightarrow L^j_s(X,\Gamma_{C^k}(E))\] by \[\left(\tilde{D}^jF\right)_u(x_1,\ldots,x_j)|_{U_i}:=\left(D^jF_i\right)_u(x_1,\ldots,x_j)\] for $u\in U$, $x_1,\ldots,x_j\in X$, $j=1,\ldots,r$. Inductively, one can show that $\tilde{D}^jF$ is well-defined, continuous, and $F$ is $r$ times continuously differentiable with $D^jF=\tilde{D}^jF$ for $j=1,\ldots,r$. Details can be found in \cite[Proof of Proposition 3.4.1.]{JWDissertation}. \end{proof} \begin{lemma}[global $\Omega$-lemma] \label{lemma glob omega}Let $r,k\in\mathbb{N}$. Let $M$ be a closed manifold of dimension $m$. Let $E\to M$ be a $C^k$ vector bundle of rank $n$, and let $h$ be a bundle metric on $E$. Choose $U_i,\tilde{U}_i,\hat{U}_i,\varphi_i,\Phi_i$, $i=1,\ldots,l$ as in Definition \ref{def C^k norm vrb} and s.t. the $\Phi_i$ are isometries on the fibers.
Let $\delta>0$ and define the open subset $U\subset E$ by \[U:=\{v\in E\text{ }|\text{ }\|v\|_h<\delta\}.\] Let $F\to M$ be a $C^k$ vector bundle of rank $d$ with local trivializations $(\hat{U}_i,\tilde{\Phi}_i)$, $i=1,\ldots,l$, and \[f\colon U\rightarrow F\] a map s.t. \begin{enumerate} \item $f$ is fiber-preserving and \item the maps \[g_i\colon \varphi_i(U_i)\times B_\delta(0)\rightarrow \mathbb{R}^d\] defined by $$g_i(x,v):=\left(pr_2\circ\tilde{\Phi}_i\circ f\circ\Phi_i^{-1}\circ (\varphi_i^{-1},id)\right)(x,v)$$ for $i=1,\ldots,l$ and $B_\delta(0)\subset\mathbb{R}^n$ the open ball in $\mathbb{R}^n$ of radius $\delta$ and center $0$, satisfy $$g_i\in C^k(\overline{\varphi_i(U_i)\times B_\delta(0)},\mathbb{R}^d)$$ and for each $j=0,\ldots,r$ the map \[D^j_2g_i\colon\varphi_i(U_i)\times B_\delta(0)\rightarrow L^j_s(\mathbb{R}^n,\mathbb{R}^d)\] defined by $(D^j_2g_i)(x,y):=(D^j(g_i(x,.)))(y)$ for all $(x,y)\in\varphi_i(U_i)\times B_\delta(0)$ exists and is an element of $C^k(\overline{\varphi_i(U_i)\times B_\delta(0)},L^j_s(\mathbb{R}^n,\mathbb{R}^d))$. \end{enumerate} Then the map \begin{align*} \Omega_f\colon \Gamma_{C^k}(E)^U&\rightarrow \Gamma_{C^k}(F),\\ s&\mapsto f\circ s, \end{align*} is an element of $C^r(\Gamma_{C^k}(E)^U,\Gamma_{C^k}(F))$ where $\Gamma_{C^k}(E)^U\subset\Gamma_{C^k}(E)$ is the open subset of $C^k$-sections of $E$ with image contained in $U$. If $r\ge 1$, then \begin{align}\label{eq2} \left(\left(D\Omega_f\right)_{s_0}s\right)(p)=(D(f|_{E_p\cap U}))_{s_0(p)}s(p) \end{align} for all $p\in M$, $s_0\in\Gamma_{C^k}(E)^U$, and all $s\in\Gamma_{C^k}(E)$. \end{lemma} A different version of the global $\Omega$-lemma can be found in \cite[Theorem 5.9]{Gl}. (Note that in \cite[Theorem 5.9]{Gl} it is a requirement that the considered map $f$ maps the zero element of each fiber onto itself, $f(0_x)=0_x$.
This makes it problematic to apply \cite[Theorem 5.9]{Gl} in our setting, since we will consider maps of the form $v\mapsto \textup{exp}_{g(p)}^{-1}\circ \textup{exp}_{f(p)}v$.) \begin{remark} \ \begin{enumerate} \item Note that in the situation of Lemma \ref{lemma glob omega} ii), the statement \[g_i\in C^k(\overline{\varphi_i(U_i)\times B_\delta(0)},\mathbb{R}^d)\text{ and } D^j_2g_i\in C^k(\overline{\varphi_i(U_i)\times B_\delta(0)},L^j_s(\mathbb{R}^n,\mathbb{R}^d))\hspace{3em}\] for $j=0,\ldots,r$ is equivalent to the statement that \[\partial^\alpha_y\partial^\beta_xg_i\colon\varphi_i(U_i)\times B_\delta(0)\rightarrow\mathbb{R}^d\] are continuous and continuously extendable to $\overline{\varphi_i(U_i)\times B_\delta(0)}$ for all $|\alpha|\le k+r$, $|\beta|\le k$, s.t. $|\alpha+\beta|\le k+r$, where $x$ denotes the ``$\varphi_i(U_i)$-direction'' and $y$ denotes the ``$B_\delta(0)$-direction''. \item The assumptions of Lemma \ref{lemma glob omega} ii) imply in particular that $\Omega_f$ is well-defined as a map: from ii) we see that $pr_2\circ\tilde{\Phi}_i\circ f\colon U\cap E|_{U_i}\rightarrow\mathbb{R}^d$ is $C^k$. It follows that $f(v)=\tilde{\Phi}_i^{-1}\circ (\pi,pr_2\circ\tilde{\Phi}_i\circ f)(v)$ for all $v\in U\cap E|_{U_i}$, where $\pi\colon E\rightarrow M$ is the projection of $E$, so $f\in C^k(U\cap E|_{U_i},F)$. Since the $U_i$ cover $M$, we have $f\in C^k(U,F)$ and thus $f\circ s\in \Gamma_{C^k}(F)$ for all $s\in \Gamma_{C^k}(E)^U$.
\end{enumerate} \end{remark} \begin{proof}[Proof of Lemma \ref{lemma glob omega}] For each $i=1,\ldots,l$ we have a commutative diagram \[ \begin{xy} \xymatrix{ \Gamma_{C^k}(E)^U \ar[r]^{\Omega_f} \ar[dd]_{R_i} & \Gamma_{C^k}(F) \ar[rr]^{\tilde{R}_i} && C^k(\overline{\varphi_i(U_i)},\mathbb{R}^d) \\ \\ C^k(\overline{\varphi_i(U_i)},B_\delta(0)) \ar[rrruu]_{\Omega_{g_i}} } \end{xy} \] where $R_i(s):=pr_2\circ\Phi_i\circ s\circ\varphi_i^{-1}$, $\tilde{R}_i(s):=pr_2\circ\tilde{\Phi}_i\circ s\circ\varphi_i^{-1}$, and $\Omega_{g_i}(h)=g_i\circ (id\times h)$. From Proposition \ref{proposition 1} we see that $\Omega_f$ is $C^r$ iff $\tilde{R}_i\circ\Omega_f$ is $C^r$ for all $i=1,\ldots,l$. Moreover, $\tilde{R}_i\circ\Omega_f=\Omega_{g_i}\circ R_i$ is $C^r$ because of Lemma \ref{lemma loc omega}, thus $\Omega_f$ is $C^r$. Equation \eqref{eq2} can be shown by differentiating the above commutative diagram and using equation \eqref{eq4} from Lemma \ref{lemma loc omega}. \end{proof} \begin{theorem}[$C^k(M,N)$ as a Banach manifold]\label{theorem ckmn banach mf} Let $k\in\mathbb{N}$. Let $M$ and $N$ be manifolds without boundary. Let $M$ be compact and let $N$ be connected. Choose a Riemannian metric $h$ on $N$. Then the topological space $C^k(M,N)$ (i.e., the set $C^k(M,N)$ equipped with the compact-open $C^k$ topology) has the structure of a smooth Banach manifold such that the following holds: for any $f\in C^k(M,N)$ and any small enough open neighborhood $U_f$ of $f$ in $C^k(M,N)$ there is an open neighborhood $V_f$ of the zero section in $\Gamma_{C^k}(f^*TN)$ such that the map \begin{align*} \varphi_f\colon U_f&\to V_f,\\ g&\mapsto \textup{exp}^{-1}\circ (f,g), \end{align*} i.e., $\varphi_f(g)(p)=(\textup{exp}_{f(p)})^{-1}g(p)$ for all $g\in U_f$, $p\in M$, is a local chart (in particular a smooth diffeomorphism). Note that the inverse of $\varphi_f$ is given by $$\varphi_f^{-1}(s)(p)=\textup{exp}_{f(p)}s(p)$$ for all $s\in V_f$, $p\in M$.
This smooth structure does not depend on the choice of the Riemannian metric $h$ on $N$. Moreover, for all $f,g\in C^k(M,N)$ s.t. $U_{f}\cap U_{g}\neq\varnothing$ it holds that \begin{align}\label{eq 111} \left(D(\varphi_g\circ\varphi_f^{-1})_{s_0}s\right)(p)=D(\textup{exp}_{g(p)}^{-1}\circ \textup{exp}_{f(p)})_{s_0(p)}s(p) \end{align} for all $p\in M$, $s_0\in\varphi_f(U_{f}\cap U_{g})$, $s\in\Gamma_{C^k}(f^*TN)$. \end{theorem} \begin{proof}For $f\in C^k(M,N)$ we denote by $U_{f,\varepsilon}$ the set defined in Lemma \ref{lemma charts c^kMN are homeo}. First we show that for $U_{f,\varepsilon^f}\cap U_{g,\varepsilon^g}\neq \varnothing$ the transition map $\varphi_g\circ\varphi_f^{-1}$ is smooth. We use a strategy similar to the proofs of Lemma \ref{lemma charts c^kMN are homeo} i)-ii). To be more precise, we first show the statement holds for sets $U_{f,\varepsilon^f}$ with some additional assumptions on the charts in the definition of $U_{f,\varepsilon^f}$. We will call these sets $U_{f,\varepsilon^f}^\text{add.}$. Then we show that we don't need these additional assumptions, provided that $\varepsilon^f>0$ and $\varepsilon^g>0$ are small enough. We start by defining the sets $U_{f,\varepsilon^f}^{\text{add.}}$, that is, we formulate which additional assumptions we make on the charts in the definition of $U_{f,\varepsilon^f}$. Let $f\in C^k(M,N)$. Choose charts $(U_i^f,\varphi_i^f)$ of $M$, $i=1,\ldots,l=l(f)$, $\bigcup_{i=1}^lU_i^f=M$, s.t. $\overline{U_i^f}\subset M$ is compact, $\overline{U_i^f}\subset\tilde{U}_i^f$, $(\tilde{U}_i^f,\varphi_i^f)$ is still a chart of $M$, $f(\overline{U_i^f})\subset V_i^f$, $(V_i^f,\psi_i^f)$ chart of $N$, $\overline{V_i^f}\subset N$ is compact, $\overline{V_i^f}\subset\tilde{V}_i^f$, $\overline{\tilde{V}_i^f}\subset N$ is compact, and $(\tilde{V}_i^f,\hat{\Phi}_i^f)$ is a local trivialization of $TN$ which is an isometry on fibers for $i=1,\ldots,l$.
Choose $$0<r^f<\min_{i=1,\ldots,l(f)}\inf_{q\in \overline{V_i^f}}\textup{inj}_q(N)$$ s.t. $E$ is a diffeomorphism from the set \[X_i^f:=\{v\in TN\text{ }|\text{ } \pi_{TN}(v)\in V_i^f, \text{ }\|v\|_h<r^f\}\] onto its image. Denote by $(f^{-1}(\tilde{V}_i^f),\Phi_i^f)$ the local trivialization of $f^*TN$ induced by $(\tilde{V}_i^f,\hat{\Phi}_i^f)$. Now we define the set \[U_{f,\varepsilon^f}^{\text{add.}}:=\bigcap_{i=1}^l\mathcal{N}^k(f,\varphi_i^f,\tilde{U}_i^f,\psi_i^f,V_i^f,\overline{U_i^f},\varepsilon^f)\] where $\varepsilon^f =\varepsilon^f(\delta^f)>0$ is chosen s.t. Lemma \ref{lemma charts c^kMN are homeo} i)-iii) hold (for $U_{f,\varepsilon^f}^{\text{add.}}$) where $$\delta^f<\frac{r^f}{6}.$$ Assume that $U_{f,\varepsilon^f}^{\text{add.}}\cap U_{g,\varepsilon^g}^{\text{add.}}\neq\varnothing$. Define \[U:=\{v\in f^*TN\text{ }|\text{ } \|v\|_{f^*h}<2\delta^f\}\] and \[F\colon U\rightarrow g^*TN\] by \[F(v):=\left((\textup{exp}_{g(p)})^{-1}\circ\textup{exp}_{f(p)}\right)(v)\] for $v\in U\cap T_{f(p)}N$. After possibly interchanging the roles of $f$ and $g$, we may assume \begin{align}\label{eq 11111} \delta^f\le \delta^g. \end{align} (It is important to note here that \eqref{eq 11111} is achieved by possibly interchanging $f$ and $g$. It is \textit{not} achieved by choosing $\delta^f$ so small that \eqref{eq 11111} holds. The latter would mean that $\delta^f$ also depends on $g$ and then some of the arguments below no longer work.) Then \begin{align} \label{eqqqq 3} F(v)=E|_{X_j^g}^{-1}(g(p),\textup{exp}_{f(p)}v) \end{align} for all $v\in U\cap T_{f(p)}N$, where $p\in U_j^g$. Hence, $F$ is well-defined. To show \eqref{eqqqq 3}, let $v\in U\cap T_{f(p)}N$ and $p\in U_j^g$. Then $\textup{exp}_{f(p)}v\in B_{2\delta^f}(f(p))$. Since $U_{f,\varepsilon^f}^{\text{add.}}\cap U_{g,\varepsilon^g}^{\text{add.}}\neq\varnothing$, we have $d(f(p),g(p))<\delta^f+\delta^g$ by the triangle inequality. Therefore, $\textup{exp}_{f(p)}v\in B_{3\delta^f+\delta^g}(g(p))$.
Since $\delta^f\le \delta^g$ we have $3\delta^f+\delta^g\le 4\delta^g<\frac46 r^g<\frac46\textup{inj}_{g(p)}(N)$. From this it is easy to see that \eqref{eqqqq 3} holds. Now we want to use Lemma \ref{lemma glob omega} to show that \begin{align*} \Omega_F\colon \Gamma_{C^k}(f^*TN)^U&\rightarrow \Gamma_{C^k}(g^*TN),\\ s&\mapsto F\circ s, \end{align*} is (well-defined and) smooth. If we have shown that, then we have in particular that $\varphi_g\circ\varphi_f^{-1}=\Omega_F|_{\varphi_f(U_{f,\varepsilon^f}^{\text{add.}}\cap U_{g,\varepsilon^g}^{\text{add.}})}$ is smooth. Condition i) of Lemma \ref{lemma glob omega} is satisfied, as $F$ is fiber-preserving. Now we show Condition ii): To that end, we consider the maps \[g_{ij}:=pr_2\circ \Phi_j^g\circ F\circ (\Phi_i^f)^{-1}\circ\left((\varphi_i^f)^{-1},id\right)\colon \varphi_i^f(U_i^f\cap U_j^g)\times B_{2\delta^f}(0)\rightarrow \mathbb{R}^n,\] where $n=\textup{dim}(N)$ and the maps \[H_{ij}\colon Y_{ij}\rightarrow TN\] where $Y_{ij}$ is the non-empty open set \[Y_{ij}:=\{(q_1,q_2,y)\in V_i^f\times V_j^g\times B_{2\delta^f}(0)\text{ }|\text{ } q_2\in B_{2\delta^f+\delta^g}(q_1)\}\] and \[H_{ij}(q_1,q_2,y):=\left((\textup{exp}_{q_2})^{-1}\circ \textup{exp}_{q_1}\right)\left((\hat{\Phi}_i^f)^{-1}(q_1,y)\right).\] Note that $H_{ij}$ is well-defined and smooth (on $Y_{ij}$) since under our assumption \eqref{eq 11111} it holds that \[H_{ij}(q_1,q_2,y)=E|_{X_j^g}^{-1}(q_2,\textup{exp}_{q_1}((\hat{\Phi}_i^f)^{-1}(q_1,y)))\] on $Y_{ij}$. Moreover, we have \begin{align}\label{eq6} pr_2\circ\Phi_j^g\circ H_{ij}\circ\left((f,g)\circ(\varphi_i^f)^{-1},id\right)=g_{ij} \end{align} on $\varphi_i^f(U_i^f\cap U_j^g)\times B_{2\delta^f}(0)$.
Given any multiindex $\alpha$ we see from \eqref{eq6} that \[\partial^\alpha_yg_{ij}(x,y)=(pr_2\circ\Phi_j^g)\left(\left(\partial^\alpha_yH_{ij}\right)\left(f((\varphi_i^f)^{-1}(x)),g((\varphi_i^f)^{-1}(x)),y)\right)\right)\] for all $(x,y)\in \varphi_i^f(U_i^f\cap U_j^g)\times B_{2\delta^f}(0)$, so $\partial^\alpha_yg_{ij}$ is $C^k$ in $(x,y)$. In particular, for $|\beta|\le k$, we have that $\partial^\beta_x\partial^\alpha_y g_{ij}$ is continuous on $\overline{\varphi_i^f(U_i^f\cap U_j^g)\times B_{\delta^f}(0)}$. We have shown Conditions i) and ii) of Lemma \ref{lemma glob omega}, which we now apply to deduce that $\Omega_F\colon \Gamma_{C^k}(f^*TN)^U\rightarrow \Gamma_{C^k}(g^*TN)$ is smooth. In particular, $\varphi_g\circ\varphi_f^{-1}=\Omega_F|_{\varphi_f(U_{f,\varepsilon^f}^{\text{add.}}\cap U_{g,\varepsilon^g}^{\text{add.}})}$ is smooth. Next we show that we don't need the additional assumptions we made in the definition of the sets $U_{f,\varepsilon^f}^{\text{add.}}$. For arbitrary $U_{f,\varepsilon}$ and $U_{g,\tilde{\varepsilon}}$ (defined as in Lemma \ref{lemma charts c^kMN are homeo}) there exist $U_{f,\varepsilon^f}^{\text{add.}}$ and $U_{g,\tilde{\varepsilon}^g}^{\text{add.}}$ with $U_{f,\varepsilon}\subset U_{f,\varepsilon^f}^{\text{add.}}$ and $U_{g,\tilde{\varepsilon}}\subset U_{g,\tilde{\varepsilon}^g}^{\text{add.}}$, provided that $\varepsilon>0$ and $\tilde{\varepsilon}>0$ are small enough (see Lemma \ref{lemma weak top properties}). If $U_{f,\varepsilon}\cap U_{g,\tilde{\varepsilon}}\neq \varnothing$, then we have in particular $U_{f,\varepsilon^f}^{\text{add.}}\cap U_{g,\tilde{\varepsilon^g}}^{\text{add.}}\neq \varnothing$. We have shown that the transition map $\varphi_g\circ\varphi_f^{-1}$ is smooth on $\varphi_f(U_{f,\varepsilon^f}^{\text{add.}}\cap U_{g,\tilde{\varepsilon^g}}^{\text{add.}})$, so it is in particular smooth on $\varphi_f(U_{f,\varepsilon}\cap U_{g,\tilde{\varepsilon}})$. 
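As a consistency check (our remark, in the flat model case $N=\mathbb{R}^n$ with the Euclidean metric): there $\textup{exp}_{g(p)}^{-1}\circ\textup{exp}_{f(p)}$ is the translation $v\mapsto v+f(p)-g(p)$, so under the canonical identifications $\Gamma_{C^k}(f^*T\mathbb{R}^n)\cong C^k(M,\mathbb{R}^n)\cong\Gamma_{C^k}(g^*T\mathbb{R}^n)$ the transition map is the affine map \begin{align*} (\varphi_g\circ\varphi_f^{-1})(s)=f+s-g, \end{align*} which is smooth with differential the identity, in accordance with \eqref{eq 111}.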
Equation \eqref{eq 111} is a direct consequence of equation \eqref{eq2}. Summing up, we have shown that for $f,g\in C^k(M,N)$ with $U_{f,\varepsilon}\cap U_{g,\tilde{\varepsilon}}\neq\varnothing$ ($\varepsilon, \tilde{\varepsilon}>0$ small enough), the transition map \[\varphi_g\circ\varphi_f^{-1}\colon \varphi_f(U_{f,\varepsilon}\cap U_{g,\tilde{\varepsilon}})\to \varphi_g(U_{f,\varepsilon}\cap U_{g,\tilde{\varepsilon}})\] is smooth \textit{after possibly interchanging the roles of $f$ and $g$}, cf. \eqref{eq 11111}. Hence, we have not yet shown that $\varphi_g\circ\varphi_f^{-1}$ is a diffeomorphism. Equation \eqref{eq 111} yields that the differential of $\varphi_g\circ\varphi_f^{-1}$ at an arbitrary $s_0\in \varphi_f(U_{f,\varepsilon}\cap U_{g,\tilde{\varepsilon}})$ is a bijective, continuous linear map (between Banach spaces), hence it is a linear isomorphism. The Inverse Mapping Theorem (see e.g. \cite[Theorem 2.5.2]{MTAA}) yields that $\varphi_g\circ\varphi_f^{-1}$ is a diffeomorphism. A similar argument can be used to show that the above smooth structure does not depend on the choice of the Riemannian metric $h$ on $N$. \end{proof} \begin{remark} For our proofs it was a crucial fact that $M$ is compact. If $M$ is non-compact, the ansatz of using the exponential map of the target manifold to construct the charts still makes sense, but the question of how to topologize $C^k(M,N)$ arises. The case $k=\infty$ was worked out in \cite{KM}. Also, one can consider an infinite-dimensional target $N$. In the case that $N$ is a Banach manifold admitting partitions of unity, one can work with sprays to construct the charts \cite{Blue}. \end{remark} Lastly, we want to state and prove some mapping properties which can be found in e.g. \cite{Blue}. \begin{proposition}[Mapping properties] Let $k,r\in\mathbb{N}$. Let $M, N, A, Z$ be manifolds without boundary. Assume that $M$ and $A$ are compact. Moreover, assume that $N$ and $Z$ are connected.
Then the following holds: \begin{enumerate} \item[i)] If $g\in C^{k+r}(N,Z)$, then the map \begin{align*} \omega_g\colon C^k(M,N)&\to C^k(M,Z),\\ f&\mapsto g\circ f, \end{align*} is $C^r$. \item[ii)] If $g\in C^k(A,M)$, then the map \begin{align*} \alpha_g\colon C^k(M,N)&\to C^k(A,N),\\ f&\mapsto f\circ g, \end{align*} is smooth. \end{enumerate} \end{proposition} \begin{proof} To show assertion i) of the proposition, one uses the local charts from Theorem \ref{theorem ckmn banach mf} to write down the local representative of $\omega_g$ to which Lemma \ref{lemma glob omega} is applied. Assertion ii) of the proposition is shown in a similar manner. First, one uses the charts of $C^k(M,N)$ to reduce to the situation of maps between sections of vector bundles. Using local trivializations of these vector bundles, the problem is further reduced to the following statement: Let $Y$ be a Banach space. Let $U\subset \mathbb{R}^n$ and $V\subset\mathbb{R}^m$ be open and bounded. Given $g\in C^k(\overline{V}, U)$, the map \begin{align*} \tilde{\alpha}_g\colon C^k(\overline{U},Y)&\to C^k(\overline{V},Y),\\ f&\mapsto f\circ g, \end{align*} is smooth. This, however, is clear, since $\tilde{\alpha}_g$ is linear and continuous. \end{proof} \subsection*{Acknowledgments}The author would like to thank Bernd Ammann and Olaf Müller for interesting discussions about this topic. The author also thanks Peter Michor and Alexander Schmeding for providing useful references and comments. The author's work was supported by the DFG Graduiertenkolleg GRK 1692 ``Curvature, Cycles, and Cohomology''. \end{document}
A 3D localisation method in indoor environments for virtual reality applications
Wei Song, Liying Liu, Yifei Tian, Guodong Sun, Simon Fong & Kyungeun Cho
Human-centric Computing and Information Sciences, volume 7, Article number: 39 (2017)
Virtual Reality (VR) has recently experienced rapid development for human–computer interactions. Users wearing VR headsets gain an immersive experience when interacting with a 3-dimensional (3D) world. We utilise a light detection and ranging (LiDAR) sensor to detect a 3D point cloud from the real world. To match the scale between a virtual environment and a user's real world, this paper develops a boundary wall detection method using the Hough transform algorithm. A connected-component-labelling (CCL) algorithm is applied to classify the Hough space into several distinguishable blocks that are segmented using a threshold. The four largest peaks among the segmented blocks are extracted as the parameters of the wall plane. The virtual environment is scaled to the size of the real environment. In order to synchronise the position of the user and his/her avatar in the virtual world, a wireless Kinect network is proposed for user localisation. Multiple Kinects are mounted in an indoor environment to sense the user's information from different viewpoints. The proposed method supports the omnidirectional detection of the user's position and gestures. To verify the performance of our proposed system, we developed a VR game using several Kinects and a Samsung Gear VR device.
In recent years, head-mounted displays have been widely developed for Virtual Reality (VR) simulations and video games. However, due to the need to wear stereoscopic displays, users cannot view their real environment. Traditionally, the virtual environment's boundary does not match that of a user's real environment. 
Thus, collisions between the user and the real world always occur in VR applications and cause poor user experiences. To create an adaptive virtual environment, boundary measurement of the real environment is necessary for warnings. Currently, a light detection and ranging (LiDAR) sensor is utilised to detect the 3D point cloud of the surrounding environment. From the point cloud, large planar regions are recognised as the boundary walls [1]. In order to detect the boundary of an indoor environment, this paper develops a boundary wall detection method based on the Hough transform algorithm [2]. After the Hough transform is implemented on the LiDAR datasets, a connected-component-labelling (CCL) algorithm is applied to classify the segmented intensive regions of the Hough space into several distinguishable blocks. The corresponding Hough coordinates of the largest four peaks of the blocks are recognised as the wall plane parameters. By scaling the virtual environment to the real environmental range, the user is able to act in the virtual environment without collisions, thus enhancing the user experience. The tracking of the skeleton of a human body using RGB images and the depth sensors of the Microsoft Kinect has been widely applied for interactions between users and virtual objects in VR applications [3]. When we utilise the Kinect to acquire a user's gesture, the user needs to stand in front of the Kinect within a limited distance and face the Kinect [4]. Otherwise, weak and inaccurate signals are sensed. For omnidirectional detection, this paper proposes a multiple Kinect network using a bivariate Gaussian probability density function (PDF). In the system, multiple Kinect sensors installed in an indoor environment detect a user's gesture information from different viewpoints. The sensed datasets of the distributed clients are sent to a VR management server that selects an adaptive Kinect based on the user's distance and orientation. 
In our method, only small datasets of the user's position and body joints are delivered from the Kinect clients to the server; this satisfies the real-time transmission requirements [5]. The remainder of this paper is organised as follows. "Related works" section provides an overview of related works. "A 3D localisation system" section describes the 3D localisation system, including the environmental boundary walls detection method and wireless Kinect sensor network selection. "Experiments" section illustrates the experiment results. Finally, "Conclusions" section concludes this paper.
Related works
To realise a virtual–physical collaboration approach, environmental recognition methods such as plane and feature detection have been researched [6]. Zucchelli et al. [7] detected planes from stereo images using a motion-based segmentation algorithm. The planar parameters were extracted automatically with projective distortions. The traditional Hough transform was usually used to detect straight lines and geometric shapes from the images. Trucco et al. [8] detected the planes from the disparity space using a Hough-like algorithm. Using these methods, matching errors were caused when the outliers overlapped with the plane regions. To detect continuous planes, Hulik et al. [9] optimised a 3D Hough transform to extract large planes from LiDAR and Kinect RGB-D datasets. Using a Gaussian smoothing function, the noise in the Hough space was removed to preserve the accuracy of the plane detection process. In order to speed up the Hough space updating process, a caching technique was applied for point registration. Compared with the traditional Random Sample Consensus (RANSAC) plane detection algorithm [10], the 3D Hough transform performed faster and was more stable. During the maxima extraction process from the Hough space, this method applied a sliding window technique with a pre-computed Gaussian kernel. 
When dense noise exists surrounding a line, more than one peak is extracted in a connected segmented region using this method. In order to maintain stable line estimation, this paper applied a CCL algorithm to preserve only one peak extracted in one distinguishable region [11]. To localise and recognise a user's motion, the Kinect is a popular sensing device in VR development. It is able to report on the user's localisation and gesture information. However, a single Kinect can only capture the front-side of users facing the sensor. To sense the back-side, Chen et al. [12] utilised multiple Kinects to reconstruct an entire 3D mesh of the segmented foreground human voxels with colour information. To track people in unconstrained environments, Sun et al. [13] proposed a pairwise skeleton matching scheme using the sensing results from multiple Kinects. Using a Kalman filter, their skeleton joints were calibrated and tracked across consecutive frames. Using this method, we found that different Kinects provided different localisation of joints because the sensed surfaces were not the same from different viewpoints. To acquire accurate datasets from multiple sensors, Chua et al. [14] addressed a sensor selection problem in a smart-house using a naïve Bayes classifier, a decision tree and k-Nearest-Neighbour algorithms. Sevrin et al. [15] proposed a people localisation system with a multiple Kinects trajectory fusion algorithm. The system adaptively selected the best possible choice among the Kinects in order to detect people at a high accuracy rate [16]. Following these sensor selection methods, we developed a wireless and reliable sensor network for VR applications to enable users to walk and interact freely with virtual objects.
A 3D localisation system
This section describes an indoor 3D localisation system for VR applications. A Hough transform algorithm is applied to detect the indoor boundary walls. 
A multiple Kinects selection method is proposed to localise a user's position with an omnidirectional orientation. Indoor boundary detection from 3D point clouds To estimate the localisation of indoor walls, we describe a framework of plane detection in 3D point clouds, as shown in Fig. 1. The framework mainly includes the registration of 3D point clouds, a height histogram of 3D points, non-ground points segmentation and planar surface detection. A framework for planar detection using 3D point clouds An indoor environment always contains six large planes, including four surrounding walls, the floor and the roof. This project aims to segment the non-ground walls from the detected planes to estimate the environmental size. A height histogram, as shown in Fig. 2, is first utilised to estimate the voxel distribution of the height [14]. Since the points located on the floor or roof surfaces always have the same height value, the two peaks of the height histogram are considered to be the floor and roof surfaces. After the peaks are filtered out, the non-ground points are then segmented. The proposed height histogram In indoor environments, the planes of boundary walls always form a cuboid shape. Since most LiDAR points are projected onto the walls, the mapped 2D points on the x–z plane from the wall points are combined into four straight lines. The pairwise opposite lines are parallel to each other and the neighbour lines are orthogonal to each other. For indoor boundary detection, a Hough transform algorithm is applied to estimate the parameters of the mapped lines on x–z plane from the segmented non-ground voxels. A flowchart of the applied Hough Transform is shown in Fig. 3. A flowchart of the applied Hough Transform We assume that the walls are always orthogonal to the x–z plane. Hence, the wall plane is formulated using the following linear Eq. (1): $$r = x\cos \alpha + z\sin \alpha$$ As shown in Fig. 
4a, r is the distance from the origin to the straight line and α is the angle between the vertical direction of the line with the x axis. The Hough space is defined as the (r–α) plane calculated from a set of LiDAR points in x and z coordinates. The approximate sinusoidal curve in Fig. 4b represents the Hough space of a 2D point. As shown in Fig. 4c, all sinusoidal curves computed using the Hough transform from the points in a straight line cross at several points. The r and α coordinates of the maxima in the Hough space are the line parameters. An illustration of the Hough Transform. a Line parameters r and α. b The r–α plot of a 2D point. c The r–α plot of a line. d The r–α plot of all x–z coordinates The wall planes contain most of the points that form several straight lines on the x–z plane. Therefore, the four peaks of the Hough space are recognised as the parameters of the boundary wall planes after the (r, α) coordinates are generated from all the sensed indoor points using the Hough Transform. Each (r, α) cell in the Hough space records the count of the mapped LiDAR points; these indicate the occurrence frequency. The four peaks always exist in the intensive areas as shown in Fig. 4d. Figure 5a presents an instance of the occurrence frequency in the Hough space. To segment the intensive areas, the low frequency cells are filtered out using a threshold based on the occurrence frequency distribution of the cells. The valid cells are segmented as shown in Fig. 5b, and are classified into several distinguishable blocks using the CCL algorithm. In the CCL algorithm, the label of each cell is initialised corresponding to its index, as shown in Fig. 5c. To mark each distinguishable block with a unique label, the minimum label in Fig. 5d is searched for among a clique of each cell that contains the local, right and bottom cells. The clique updates the labels with the minimum label in it. 
Several seeking iterations of the minimum labels are implemented until all labels remain unchanged. The minimum label in a distinguishable block in Fig. 5e is the indicator of the connected valid cells. Finally, the corresponding (r, α) coordinate of the largest value in each distinguishable block of Fig. 5f is the required straight-line parameter. The process of the CCL algorithm. a The counts of the occurrence frequency in the Hough space. b The valid cells segmented using a threshold. c The labels initialised corresponding to the cell indices. d The process of finding the minimum labels among each clique. e The minimum labelling result. f Peak extraction of each distinguishable cluster
Adaptive Kinect selection
We propose a wireless sensor network to localise the VR user using the integration of multiple Kinects. As shown in Fig. 6, the user's motion and position datasets are detected from multiple views using the Kinects. The distributed Kinects report the sensed datasets to a VR server via a WiFi network. An adaptive Kinect is selected using a bivariate Gaussian PDF. The proposed 3D localisation method using multiple Kinects A Kinect is installed at each client to detect the user's gesture information from different viewpoints. From several gathered datasets, the effectiveness of each sensor is generated based on the user's distance d_i and orientation θ_i to the Kinect k_i. If the distance is close and the orientation of the user is facing towards a sensor, the effectiveness of this sensor is then high. 
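The boundary-detection pipeline described earlier (Hough voting over the (r, α) plane followed by CCL-based peak extraction) can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: the cell resolution, angular step and vote threshold are arbitrary choices for demonstration.

```python
import math
from collections import defaultdict

def hough_accumulator(points, r_res=0.1, a_steps=180):
    """Vote each 2D point (x, z) into an (r, alpha) accumulator
    using the line model r = x*cos(alpha) + z*sin(alpha)."""
    acc = defaultdict(int)
    for x, z in points:
        for k in range(a_steps):
            alpha = math.pi * k / a_steps
            r = x * math.cos(alpha) + z * math.sin(alpha)
            acc[(round(r / r_res), k)] += 1
    return acc

def ccl_peaks(acc, threshold):
    """Keep cells with at least `threshold` votes, group them into
    4-connected blocks (labels propagate until stable, as in the
    paper's CCL step), and return the highest-vote cell per block."""
    valid = {c for c, n in acc.items() if n >= threshold}
    label = {c: c for c in valid}   # initialise each label to its own index
    changed = True
    while changed:                  # iterate until all labels remain unchanged
        changed = False
        for (r, a) in valid:
            for nb in ((r + 1, a), (r - 1, a), (r, a + 1), (r, a - 1)):
                if nb in valid and label[nb] < label[(r, a)]:
                    label[(r, a)] = label[nb]
                    changed = True
    blocks = defaultdict(list)
    for c in valid:
        blocks[label[c]].append(c)
    return [max(cells, key=lambda c: acc[c]) for cells in blocks.values()]
```

Run on points sampled from the walls' footprint lines, this recovers one (r, α) peak per wall; the four largest peaks would then be taken as the wall parameters. The vote threshold is data-dependent and would in practice be derived from the occurrence-frequency distribution, as in the paper.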
To select the best sensor, we apply a bivariate Gaussian PDF for the effectiveness estimation, formulated as follows:
$$f_{k_i}(d_i, \theta_i) = \frac{1}{2\pi \sigma_1 \sigma_2 \sqrt{1-\rho^2}} \exp\left[-\frac{\left(\frac{d_i - d_0}{\sigma_1}\right)^2 - \frac{2\rho (d_i - d_0)(\theta_i - \theta_0)}{\sigma_1 \sigma_2} + \left(\frac{\theta_i - \theta_0}{\sigma_2}\right)^2}{2(1-\rho^2)}\right].$$
Here, the variables satisfy $d_i \in [0, \infty)$, $\theta_i \in [-\pi, \pi)$, $\sigma_1 = 1$, $\sigma_2 = 1$ and $\rho \in [-1, 0]$. The adaptive Kinect is selected using a maximum likelihood function expressed as follows:
$$k = \arg\max_{k_i} f_{k_i}(d_i, \theta_i).$$
Experiments
In this section, we analyse the performance of the proposed indoor boundary walls detection method from LiDAR points and illustrate a VR application developed using the proposed 3D localisation method. The experiments were implemented using one HDL-32E Velodyne LiDAR and two Microsoft Kinect2 sensors. The wall detection method was executed on a 3.20 GHz Intel® Core™ Quad CPU computer with a GeForce GT 770 graphics card and 4 GB of RAM. The Kinects were utilised to detect a user's gesture on two clients; these were 3.1 GHz Intel® Core™ i7-5557U CPU NUC mini PCs with 16 GB of RAM. The VR client was implemented on a Samsung Gear VR with a Samsung Galaxy Note 4 in it. The Note 4 had a 2.7 GHz Qualcomm Snapdragon Quad CPU, 3 GB of RAM, a 2560 × 1440 pixels resolution and the Android 4.4 operating system. The applied HDL-32E was able to sense 32 × 12 3D points in a packet per 552.96 μs. The field of view was 41.34° in the vertical direction and 360° in the horizontal direction with an angular resolution of 1.33°. The valid range was 70 m with an error variance of 2 cm. In our project, the 3D point clouds were reconstructed using DirectX software development kits. 
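The adaptive selection rule formulated above can be sketched in a few lines of Python. This is a simplified stand-in for the server logic, not the published code; the peak parameters d0 = 5.0 and θ0 = 0.0 follow the values reported in the experiments, while ρ = 0 (uncorrelated distance and orientation) is our simplifying assumption.

```python
import math

def effectiveness(d, theta, d0=5.0, theta0=0.0,
                  sigma1=1.0, sigma2=1.0, rho=0.0):
    """Bivariate Gaussian PDF scoring one Kinect: highest when the
    user stands at distance d0 and faces the sensor (theta0)."""
    zd = (d - d0) / sigma1
    zt = (theta - theta0) / sigma2
    q = (zd * zd - 2.0 * rho * zd * zt + zt * zt) / (2.0 * (1.0 - rho * rho))
    return math.exp(-q) / (2.0 * math.pi * sigma1 * sigma2
                           * math.sqrt(1.0 - rho * rho))

def select_kinect(readings):
    """Arg-max Kinect selection over (distance, orientation) readings,
    one pair per sensor."""
    return max(range(len(readings)),
               key=lambda i: effectiveness(*readings[i]))
```

For instance, `select_kinect([(5.1, 0.2), (2.0, 1.4)])` picks the first sensor, since that reading is near the preferred distance with the user almost facing it.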
Figure 7a presents the raw datasets of 180 × 32 × 12 points sensed by a stationary Velodyne LiDAR in an indoor environment. By projecting the non-ground points onto the x–z plane, a density diagram was generated as shown in Fig. 7b where mapped cells with a high density are represented using red. The intensive regions of line shapes were considered to be the boundary walls. A 3D representation scene of the LiDAR datasets. a The 180 raw datasets of the 3D point cloud. b The x–z coordinates projected from non-ground points Using the proposed Hough Transform, the Hough space shown in Fig. 8a was generated from the non-ground points. The brightness of a cell in the Hough space indicates the occupied frequency of the (r, α) coordinates. In our experiment, the range of the distance r was calculated to be between − 10.598 and 5.909 m and the inclination angle α was between 0° and 180°. The system allocated an 825 × 360 integer buffer for the Hough space cache. Using a threshold computed based on the value distribution of the Hough space, the intensive regions were segmented as shown in Fig. 8b. After the proposed CCL algorithm was implemented using 19 iterations, 55 distinguishable blocks were grouped using different colours as shown in Fig. 8c. By selecting the four largest peaks from the distinguishable blocks, the corresponding coordinates (r, α) were taken as the parameters of the straight lines. In Fig. 8d, we displayed the detected boundary walls with the LiDAR points. The estimated wall planes are located on the wall voxels, thus proving that our proposed method was accurate. The experimental results of indoor boundary detection from 3D point clouds. a The Hough space generated from the projected x–z coordinates using a Hough Transform. b The intensive areas filtered using a threshold. c The distinguishable blocks grouped using the CCL algorithm. 
d A representation of the detected boundary walls from the LiDAR points A VR boxing game developed using the proposed wireless multiple Kinect sensors selection system The range of the indoor environment was estimated to be 9.94 m in length and 7.54 m in width. The virtual environment was resized to correspond to the real environment so as to achieve virtual–physical synchronisation. The wall detection method was implemented during an initialisation step before the VR application was started. Using the proposed system, we developed a VR boxing game as shown in Fig. 9. In the system, the user's location and orientation were detected by two Kinects. When the player was facing a Kinect with a distance between 2 and 6 m, the motion information was sensed precisely. Through the experiments, we found that d_0 = 5.0 and θ_0 = 0.0 give the best position for Kinect detection. Through the selection of an effective Kinect, the user was able to make free movements and interact with the virtual boxer from an omnidirectional orientation. Meanwhile, the monitor of the server rendered the game visualisation result synchronously with the VR display. The processing speed of our application including data sensing, transmission and visualisation was greater than 35 fps; this successfully achieved the real-time requirements.
Conclusions
To provide a free movement environment for VR applications, this paper demonstrated a 3D localisation method for virtual–physical synchronisation. For environmental detection, we utilised a HDL-32E Velodyne LiDAR sensor to detect the surrounding 3D point clouds. Using the Hough transform, a plane detection algorithm was proposed to extract indoor walls from point clouds so as to estimate the distance range of the surrounding environment. The virtual environment was then correspondingly resized. To match the user's position between real and virtual worlds, a wireless Kinect network was proposed for omnidirectional detection of the user's localisation. 
In the sensor selection process, we applied a Bivariate Gaussian PDF and the Maximum Likelihood Estimation method to select an adaptive Kinect. In the future, we will integrate touch sensors to the system for virtual–physical collaboration. Dick A, Torr P, Cipolla R (2004) Automatic 3d modeling of architecture, In: Proc. 11th British Machine Vision Conf. pp 372–381 Mukhopadhyay P, Chaudhuri B (2015) A survey of hough transform. Pattern Recognit 48(3):993–1010 Ales P, Oldrich V, Martin V et al (2015) Use of the image and depth sensors of the Microsoft Kinect for the detection of gait disorders. Neural Comput Appl 26(7):1621–1629 Mohammed A, Ahmed S (2015) Kinect-based humanoid robotic manipulator for human upper limbs movements tracking. Intell Control Autom 6(1):29–37 Song W, Sun G, Fong S et al (2016) A real-time infrared LED detection method for input signal positioning of interactive media. J Converg 7:1–6 Junho A, Richard H (2015) An indoor augmented-reality evacuation system for the Smartphone using personalized Pedometry. Hum Centric Comput Inf Sci 2:18 Zucchelli M, Santos-Victor J, Christensen HI (2002) Multiple plane segmentation using optical flow. In: Proc. 13th British Machine Vision Conf. pp 313–322 Trucco E, Isgro F, Bracchi F (2003) Plane detection in disparity space. In: Proc. IEE Int. Conf. Visual Information Engineering. pp 73–76 Hulik R, Spanel M, Smrz P, Materna Z (2014) Continuous plane detection in point-cloud data based on 3D hough transform. J Vis Commun Image R 25(1):86–97 Schnabel R, Wahl R, Klein R (2007) Efficient RANSAC for point-cloud shape detection. Comput Graph Forum 26(2):214–226 Song W, Tian Y, Fong S, Cho K, Wang W, Zhang W (2016) GPU-accelerated foreground segmentation and labeling for real-time video surveillance. Sustainability 8(10):916–936 Chen Y, Dang G, Chen Z et al (2014) Fast capture of personalized avatar using two Kinects. 
J Manuf Syst 33(1):233–240
Sun S, Kuo C, Chang P (2016) People tracking in an environment with multiple depth cameras: a skeleton-based pairwise trajectory matching scheme. J Vis Commun Image R 35:36–54
Chua SL, Foo LK (2015) Sensor selection in smart homes. Procedia Comput Sci 69:116–124
Sevrin L, Noury N, Abouchi N et al (2015) Preliminary results on algorithms for multi-kinect trajectory fusion in a living lab. IRBM 36:361–366
Erkan B, Nadia K, Adrian FC (2015) Augmented reality applications for cultural heritage using Kinect. Hum Centric Comput Inf Sci 5(20):1–8
Li M, Song W, Song L, Huang K, Xi Y, Cho K (2016) A wireless kinect sensor network system for virtual reality applications. Lect Notes Electr Eng 421:61–65
WS and LL described the proposed algorithms and wrote the whole manuscript. YT and GS implemented the experiments. SF and KC revised the manuscript. All authors read and approved the final manuscript.
This research was supported by the National Natural Science Foundation of China (61503005), and by NCUT XN024-95. This paper is a revised version of a paper entitled 'A Wireless Kinect Sensor Network System for Virtual Reality Applications' presented in 2016 at Advances in Computer Science and Ubiquitous Computing-CSA-CUTE2016, Bangkok, Thailand [17].
Wei Song, Liying Liu, Yifei Tian, Guodong Sun, Simon Fong and Kyungeun Cho contributed equally to this work.
School of Computer Science, North China University of Technology, Beijing, China: Wei Song, Liying Liu & Yifei Tian
Department of Digital Media Technology, Beijing University of Technology, Beijing, China: Guodong Sun
Department of Computer and Information Science, University of Macau, Macau, China: Simon Fong
Department of Multimedia Engineering, Dongguk University, Seoul, South Korea: Kyungeun Cho
Correspondence to Wei Song.
Song, W., Liu, L., Tian, Y. et al. A 3D localisation method in indoor environments for virtual reality applications. Hum. Cent. Comput. Inf. Sci. 7, 39 (2017). 
https://doi.org/10.1186/s13673-017-0120-7
Keywords: Hough transform, connected-component-labelling
\begin{definition}[Definition:Multiplication of Cuts] Let $0^*$ denote the rational cut associated with the (rational) number $0$. Let $\alpha$ and $\beta$ be cuts. The operation of '''multiplication''' is defined on $\alpha$ and $\beta$ as: :$\alpha \beta := \begin {cases} \size \alpha \, \size \beta & : \alpha \ge 0^*, \beta \ge 0^* \\ -\paren {\size \alpha \, \size \beta} & : \alpha < 0^*, \beta \ge 0^* \\ -\paren {\size \alpha \, \size \beta} & : \alpha \ge 0^*, \beta < 0^* \\ \size \alpha \, \size \beta & : \alpha < 0^*, \beta < 0^* \end {cases}$ where: :$\size \alpha$ denotes the absolute value of $\alpha$ :$\size \alpha \, \size \beta$ is defined as in Multiplication of Positive Cuts :$\ge$ denotes the ordering on cuts. In this context, $\alpha \beta$ is known as the '''product of $\alpha$ and $\beta$'''. \end{definition}
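As a concrete illustration (ours, not part of the source definition), let $\alpha$ be the cut associated with $-3$ and $\beta$ the cut associated with $4$, writing $3^*$, $4^*$, $12^*$ for the rational cuts associated with $3$, $4$, $12$. Then $\alpha < 0^*$ and $\beta \ge 0^*$, so the second case of the definition applies:

```latex
% second case: \alpha < 0^*, \beta \ge 0^*
\alpha \beta = -\paren {\size \alpha \, \size \beta}
             = -\paren {3^* \times 4^*}
             = -\paren {12^*}
```

That is, the product is the cut associated with $-12$, matching ordinary multiplication of the associated rational numbers.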
\begin{document} \title{On Nash-solvability of finite $n$-person shortest path games; bi-shortest path conjecture} \begin{abstract} We formulate a conjecture from graph theory that is equivalent to Nash-solvability of finite two-person shortest path games with positive local costs. For three-person games the analogous conjecture fails. \newline {\bf Keywords}: shortest path games, Nash equilibrium, Nash-solvability, cost function, payoff function, total effective cost, digraph, directed cycle. \newline {\bf MSC subject classification}: 91A05, 91A06, 91A15, 91A18. \end{abstract} \section{Bi-shortest path conjecture} \label{s1} Let $G = (V,E)$ be a finite directed graph (digraph) with two distinct vertices $s, t \in V$. We assume that \begin{itemize} \item[(j)] every vertex $v \in V \setminus \{t\}$ has an outgoing edge, while $t$ has none; \item[(jj)] $G$ contains a directed path from $s$ to $t$; \item[(jjj)] every edge $e \in E$ belongs to such a path. \end{itemize} If (j) fails for $v$ we merge $v$ and $t$; if (jjj) fails for $e$ we delete $e$ from $E$. Given a partition $V \setminus \{t\} = V_1 \cup V_2$ with non-empty $V_1$ and $V_2$, assign an ordered pair of positive real numbers $(r_1(e), r_2(e))$ to every $e \in E$. Fix $i \in \{1, 2\}$ and a mapping $s_i$ that assigns to each $v \in V_i$ an edge $e \in E$ going from $v$. Delete all other edges going from $v$. In the obtained digraph find a directed shortest path (SP) from $s$ to $t$, assuming that $r_{3-i}(e)$ are the lengths of the edges $e \in E$. (One can use, for example, Dijkstra's SP algorithm.) Doing so for $i = 1, 2$ and for every $s_i$ we obtain two sets of directed $(s,t)$-paths. We conjecture that these two sets intersect and call this statement the {\em Bi-SP conjecture}. Without loss of generality (WLOG) we can assume that all $(s,t)$-paths have pairwise different lengths. It may happen that some mappings $s_i$ leave no $(s,t)$-path. Then, we choose nothing. 
Let us slightly modify the procedure, choosing in this case some symbolic path $c$. Then we obtain a weak version of the Bi-SP conjecture. Indeed, if the obtained two sets of $(s,t)$-paths have only $c$ in common then the Bi-SP conjecture fails, but the weak Bi-SP one holds. WLOG, we can restrict ourselves to bipartite graphs with parts $(V_1,V_2)$. Indeed, if $E$ contains an edge $e = (u,w)$ such that both $u,w \in V_i$, we subdivide $e$ by a vertex $v \in V_{3-i}$ into two edges $e' = (u,v)$ and $e'' = (v,w)$ choosing some lengths $r_i(e') > 0$ and $r_i(e'') > 0$ such that $r_i(e) = r_i(e') + r_i(e'')$ for $i = 1,2$. \section{Finite $n$-person shortest path games} \label{s2} \subsection*{Players, positions, moves, and local costs} Given a finite digraph $G =(V, E)$ satisfying assumptions (j, jj, jjj) of Section~\ref{s1}, let us generalize case $n=2$ and consider an arbitrary integer $n \geq 2$. Partition vertices into $n$ non-empty subsets $V \setminus \{t\} = V_1 \cup \ldots \cup V_n$, assign an ordered $n$-tuple of positive real numbers $r(e) = (r_1(e), \ldots, r_n(e))$ to each $e \in E$, and consider the following interpretation: $I = \{1, \ldots, n\}$ is a set of {\em players}, $V_i$ the set of {\em positions} controlled by player $i \in I$; furthermore, $s = v_0$ and $t = v_t$ are respectively the {\em initial} and {\em terminal} positions; $E$ is the set of {\em legal moves}, and finally, $r_i(e)$ is the cost of move $e \in E$ for player $i \in I$, called the {\em local cost}. \subsection*{Strategies, plays, and effective costs} \label{ss2a} A mapping $s_i$ that assigns a move $(v, v')$ to each position $v \in V_i$ is a strategy of player $i \in I$. (We restrict ourselves and all players to their pure stationary strategies; no mixed or history-dependent ones are considered in this paper.)
Each {\em strategy profile} $s = (s_1, \ldots, s_n)$ uniquely defines a play $p(s)$, that is, a walk in $G$ that begins in the initial position $s = v_0$ and goes in accordance with $s$ in every position that appears. Obviously, $p(s)$ either terminates in $t = v_t$ or cycles; respectively, it is called a terminal or a cyclic play. Indeed, after $p(s)$ revisits a position, it will repeat its previous moves thus making a ``lasso''. The effective cost of $p(s)$ for a player $i \in I$ is additive, that is, $$r_i(p(s)) = \sum_{e \in p(s)} r_i(e) \;\;\; \text{if $p(s)$ is a terminal play;}$$ $$r_i(p(s))= +\infty \;\;\; \text{if $p(s)$ is a cyclic play.}$$ In other words, each player $i \in I$ pays the local cost $r_i(e)$ for every move $e \in p(s)$. Since a cyclic play $p(s)$ never finishes and all local costs are positive, each player pays $+\infty$. All players are minimizers. Thus, a finite $n$-{\em person SP game} is defined. We study Nash-solvability (NS) of these games. \section{Nash equilibrium and Nash-solvability} \label{s3} Recall that a {\em strategy profile} $s = (s_1, \dots, s_n)$ is called a {\em Nash equilibrium} (NE) if $r_i(s') \geq r_i(s)$ whenever $s'$ differs from $s$ only by the strategy of player $i$, that is, $s_j = s'_j$ for all $j \neq i$. In other words, no player $i \in I$ can make a profit by changing his/her strategy provided all other players keep their strategies unchanged. The Bi-SP conjecture means exactly that all finite two-person SP games (with positive local costs) are NS. Indeed, a pair of strategies $s = (s_1, s_2)$ realizes a bi-shortest path in $G$ if and only if $s$ is a NE in the corresponding two-person SP game. However, a three-person SP game, even with positive local costs, may not be NS; see \cite[Tables 2, 3 and Figure 2]{GO14}. Digraph $G = (V,E)$ is called {\em bidirected} if each non-terminal move in it is reversible, that is, $(u,w) \in E$ if and only if $(w,u) \in E$ unless $u = t$ or $w = t$. 
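Returning to the two-person case, a toy instance (ours, for illustration only) shows the Bi-SP/NE correspondence at work:

```latex
% A smallest non-trivial instance of the Bi-SP setting.
Let $V = \{s, v, t\}$ with $s \in V_1$, $v \in V_2$, and
$E = \{e_1 = (s,t),\; e_2 = (s,v),\; e_3 = (v,t)\}$,
with arbitrary positive local costs $(r_1(e), r_2(e))$.
Player $2$ has the unique strategy $s_2(v) = e_3$.
Fixing $i = 1$: the choice $s_1(s) = e_1$ leaves only the path $(e_1)$,
while $s_1(s) = e_2$ leaves only $(e_2, e_3)$; the first set of paths is
$\{(e_1),\, (e_2, e_3)\}$. Fixing $i = 2$: all edges remain, and the
$r_1$-shortest of the two $(s,t)$-paths is chosen. The two sets intersect
in the $r_1$-shortest path, and the corresponding profile $(s_1, s_2)$ is
a NE: player $2$ cannot deviate at all, while player $1$ already
minimizes his total cost $r_1$.
```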
We conjecture that every $n$-person SP game on a finite bidirected digraph is NS. \section{Essential properties of cost functions} \label{s4} \subsection*{$k$-total costs and rewards} SP games can be viewed as a very special class within the so-called finite deterministic stochastic games with perfect information with $k$-total effective reward \cite{BEGM17}. (Negated costs are called payoffs or rewards.) The limit mean payoff \cite{Gil57,LL69}, most common in the literature, and the total reward \cite{TV98,BEGM18}, correspond to $k=0$ and $k=1$, respectively \cite{BEGM17}. The family of $k$-total effective rewards is nested with respect to $k$, that is, $k$-total rewards can be properly embedded into $(k+1)$-total rewards \cite{BEGM17}. Mostly, the two-person zero-sum case is studied in the literature. Yet, all main concepts and definitions can be naturally extended to the $n$-person case; in particular, to the two-person but not necessarily zero-sum case. The obtained games may have no NE already for $n=2$ and $k=0$; see \cite{Gur88}. Since they are $k$-nested, NS may fail for any $n \geq 2$ and $k \geq 0$. Yet, NS becomes an open problem for $n = 2$ and $k = 1$, provided we require that all local rewards are negative, or in other words, that all local costs are positive \cite[Section 8]{BEGM17}. This is an alternative view at the Bi-SP conjecture. \subsection*{Positive costs and Gallai's Potential Transformation} The latter requirement: \begin{itemize} \item[(i)] $\;\;\; r_i(e) > 0$ for each player $i \in I$ and directed edge $e \in E$ \end{itemize} \noindent can be replaced by a seemingly weaker (but in fact, equivalent) one: \begin{itemize} \item[(ii)] $\;\;\; \sum_{e \in C} r_i(e) > 0$ for each player $i \in I$ and directed cycle $C$ in $G$. \end{itemize} Implication (i) $\Rightarrow$ (ii) is obvious. Conversely, if (ii) holds, one can enforce (i) applying the following potential transformation \cite{Gal58}. 
Choose an arbitrary mapping $x : V \rightarrow \mathbb{R}$ and replace $r_i(e)$ by $r'_i(e) = r_i(e) + x(v) - x(v')$ for every $i \in I$ and $e = (v,v') \in E$. Obviously, this transformation does not change the game, since $r'(P) - r(P) = x(s) - x(t) = const$ for every directed $(s,t)$-path $P$. Furthermore, $r'(C) = r(C)$ for every directed cycle $C$ in $G$, and for each $r$ satisfying (ii) there exists a potential $x$ such that (i) holds for $r'$ \cite{Gal58}. \section{Subgame perfect NE-free shortest path games} \label{s5} A NE $s = (s_1, \ldots, s_n)$ in a finite $n$-person SP game is called {\em uniform} if it is a NE with respect to every initial position $s = v_0 \in V \setminus t$. In the literature uniform NE (UNE) are frequently referred to as {\em subgame perfect NE}. By definition, any UNE is a NE, but not vice versa. A large family of $n$-person UNE-free games can be found in \cite[Section 3.3]{GN21A} for $n > 2$, and even for $n=2$ in \cite[the last examples in Figures 1 and 3]{BEGM12}. All these games have terminal payoffs, which is a special case of the additive one. Hence, these games can be viewed as a special case of the SP games. Every NE-free game contains a UNE-free subgame \cite[Remark 3]{BGMOV18}. Indeed, consider an arbitrary finite $n$-person NE-free SP game $\Gamma$ and eliminate the initial position $s = v_0$ from its graph $G$. The obtained subgame $\Gamma'$ is UNE-free. To see this, assume for contradiction that $\Gamma'$ has a UNE $s = (s_1, \ldots, s_n)$. Then, $\Gamma$ would also have a NE, which can be obtained by backward induction. The player beginning in $s = v_0$ chooses a move that maximizes his/her reward, assuming that $s$ is played in $\Gamma'$ by all players. Clearly, $s$ extended by this move forms a NE in $\Gamma$, which is a contradiction. Thus, when searching for NE-free SP games one should begin with a UNE-free SP game and then try to extend it with an acyclic prefix.
This was successfully realized in \cite{GO14,BGMOV18} for $n=3$. However, for $n = 2$ all such attempts failed. \end{document}
arXiv
Casey's theorem In mathematics, Casey's theorem, also known as the generalized Ptolemy's theorem, is a theorem in Euclidean geometry named after the Irish mathematician John Casey. Formulation of the theorem Let $\,O$ be a circle of radius $\,R$. Let $\,O_{1},O_{2},O_{3},O_{4}$ be (in that order) four non-intersecting circles that lie inside $\,O$ and are tangent to it. Denote by $\,t_{ij}$ the length of the exterior common tangent of the circles $\,O_{i},O_{j}$. Then:[1] $\,t_{12}\cdot t_{34}+t_{14}\cdot t_{23}=t_{13}\cdot t_{24}.$ Note that in the degenerate case, where all four circles reduce to points, this is exactly Ptolemy's theorem. Proof The following proof is attributable[2] to Zacharias.[3] Denote the radius of circle $\,O_{i}$ by $\,R_{i}$ and its tangency point with the circle $\,O$ by $\,K_{i}$. We will use the notation $\,O,O_{i}$ for the centers of the circles. Note that, by the Pythagorean theorem, $\,t_{ij}^{2}={\overline {O_{i}O_{j}}}^{2}-(R_{i}-R_{j})^{2}.$ We will try to express this length in terms of the points $\,K_{i},K_{j}$. By the law of cosines in triangle $\,O_{i}OO_{j}$, ${\overline {O_{i}O_{j}}}^{2}={\overline {OO_{i}}}^{2}+{\overline {OO_{j}}}^{2}-2{\overline {OO_{i}}}\cdot {\overline {OO_{j}}}\cdot \cos \angle O_{i}OO_{j}$ Since the circles $\,O,O_{i}$ are tangent to each other: ${\overline {OO_{i}}}=R-R_{i},\,\angle O_{i}OO_{j}=\angle K_{i}OK_{j}$ Let $\,C$ be a point on the circle $\,O$.
According to the law of sines in triangle $\,K_{i}CK_{j}$: ${\overline {K_{i}K_{j}}}=2R\cdot \sin \angle K_{i}CK_{j}=2R\cdot \sin {\frac {\angle K_{i}OK_{j}}{2}}$ Therefore, $\cos \angle K_{i}OK_{j}=1-2\sin ^{2}{\frac {\angle K_{i}OK_{j}}{2}}=1-2\cdot \left({\frac {\overline {K_{i}K_{j}}}{2R}}\right)^{2}=1-{\frac {{\overline {K_{i}K_{j}}}^{2}}{2R^{2}}}$ and substituting these in the formula above: ${\overline {O_{i}O_{j}}}^{2}=(R-R_{i})^{2}+(R-R_{j})^{2}-2(R-R_{i})(R-R_{j})\left(1-{\frac {{\overline {K_{i}K_{j}}}^{2}}{2R^{2}}}\right)$ ${\overline {O_{i}O_{j}}}^{2}=(R-R_{i})^{2}+(R-R_{j})^{2}-2(R-R_{i})(R-R_{j})+(R-R_{i})(R-R_{j})\cdot {\frac {{\overline {K_{i}K_{j}}}^{2}}{R^{2}}}$ ${\overline {O_{i}O_{j}}}^{2}=((R-R_{i})-(R-R_{j}))^{2}+(R-R_{i})(R-R_{j})\cdot {\frac {{\overline {K_{i}K_{j}}}^{2}}{R^{2}}}$ And finally, the length we seek is $t_{ij}={\sqrt {{\overline {O_{i}O_{j}}}^{2}-(R_{i}-R_{j})^{2}}}={\frac {{\sqrt {R-R_{i}}}\cdot {\sqrt {R-R_{j}}}\cdot {\overline {K_{i}K_{j}}}}{R}}$ We can now evaluate the left hand side, with the help of the original Ptolemy's theorem applied to the inscribed quadrilateral $\,K_{1}K_{2}K_{3}K_{4}$: ${\begin{aligned}&t_{12}t_{34}+t_{14}t_{23}\\[4pt]={}&{\frac {1}{R^{2}}}\cdot {\sqrt {R-R_{1}}}{\sqrt {R-R_{2}}}{\sqrt {R-R_{3}}}{\sqrt {R-R_{4}}}\left({\overline {K_{1}K_{2}}}\cdot {\overline {K_{3}K_{4}}}+{\overline {K_{1}K_{4}}}\cdot {\overline {K_{2}K_{3}}}\right)\\[4pt]={}&{\frac {1}{R^{2}}}\cdot {\sqrt {R-R_{1}}}{\sqrt {R-R_{2}}}{\sqrt {R-R_{3}}}{\sqrt {R-R_{4}}}\left({\overline {K_{1}K_{3}}}\cdot {\overline {K_{2}K_{4}}}\right)\\[4pt]={}&t_{13}t_{24}\end{aligned}}$ Further generalizations It can be seen that the four circles need not lie inside the big circle. In fact, they may be tangent to it from the outside as well. In that case, the following change should be made:[4] If $\,O_{i},O_{j}$ are both tangent from the same side of $\,O$ (both in or both out), $\,t_{ij}$ is the length of the exterior common tangent. 
If $\,O_{i},O_{j}$ are tangent from different sides of $\,O$ (one in and one out), $\,t_{ij}$ is the length of the interior common tangent. The converse of Casey's theorem is also true.[4] That is, if equality holds, the circles are tangent to a common circle. Applications Casey's theorem and its converse can be used to prove a variety of statements in Euclidean geometry. For example, the shortest known proof[1]: 411 of Feuerbach's theorem uses the converse theorem. References 1. Casey, J. (1866). "On the Equations and Properties: (1) of the System of Circles Touching Three Circles in a Plane; (2) of the System of Spheres Touching Four Spheres in Space; (3) of the System of Circles Touching Three Circles on a Sphere; (4) of the System of Conics Inscribed to a Conic, and Touching Three Inscribed Conics in a Plane". Proceedings of the Royal Irish Academy. 9: 396–423. JSTOR 20488927. 2. Bottema, O. (1944). Hoofdstukken uit de Elementaire Meetkunde. (translation by Reinie Erné as Topics in Elementary Geometry, Springer 2008, of the second extended edition published by Epsilon-Uitgaven 1987). 3. Zacharias, M. (1942). "Der Caseysche Satz". Jahresbericht der Deutschen Mathematiker-Vereinigung. 52: 79–89. 4. Johnson, Roger A. (1929). Modern Geometry. Houghton Mifflin, Boston (republished facsimile by Dover 1960, 2007 as Advanced Euclidean Geometry). External links • Weisstein, Eric W. "Casey's theorem". MathWorld. • Shailesh Shirali: "On a generalized Ptolemy Theorem". In: Crux Mathematicorum, Vol. 22, No. 2, pp. 49–53
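As an informal numerical check (not part of the article; the configuration below is made up), one can pick tangency points and radii, place each small circle internally tangent to the big one, compute the exterior tangent lengths from the Pythagorean relation above, and verify the identity:

```python
import math

# Hypothetical configuration: big circle of radius R centered at the origin;
# circle i is internally tangent to it at angle th[i] and has radius r[i],
# so its center lies at distance R - r[i] from the origin in that direction.
R = 10.0
th = [0.2, 1.3, 2.9, 4.6]   # tangency points K_1..K_4 in cyclic order
r = [1.0, 0.5, 1.5, 0.8]
centers = [((R - ri) * math.cos(a), (R - ri) * math.sin(a))
           for a, ri in zip(th, r)]

def t_ext(i, j):
    """Length of the exterior common tangent of circles i and j."""
    (xi, yi), (xj, yj) = centers[i], centers[j]
    d2 = (xi - xj) ** 2 + (yi - yj) ** 2
    return math.sqrt(d2 - (r[i] - r[j]) ** 2)

lhs = t_ext(0, 1) * t_ext(2, 3) + t_ext(0, 3) * t_ext(1, 2)
rhs = t_ext(0, 2) * t_ext(1, 3)
print(abs(lhs - rhs))  # agrees up to floating-point error
```

Setting all radii to zero reduces this to a numeric check of Ptolemy's theorem for the cyclic quadrilateral $K_1K_2K_3K_4$.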
Wikipedia
Mott polynomials In mathematics, the Mott polynomials $s_{n}(x)$ are polynomials introduced by N. F. Mott (1932, p. 442) who applied them to a problem in the theory of electrons. They are given by the exponential generating function $e^{x({\sqrt {1-t^{2}}}-1)/t}=\sum _{n}s_{n}(x)t^{n}/n!.$ Because the factor in the exponential has the power series ${\frac {{\sqrt {1-t^{2}}}-1}{t}}=-\sum _{k\geq 0}C_{k}\left({\frac {t}{2}}\right)^{2k+1}$ in terms of Catalan numbers $C_{k}$, the coefficient in front of $x^{k}$ of the polynomial can be written as $[x^{k}]s_{n}(x)=(-1)^{k}{\frac {n!}{k!2^{n}}}\sum _{n=l_{1}+l_{2}+\cdots +l_{k}}C_{(l_{1}-1)/2}C_{(l_{2}-1)/2}\cdots C_{(l_{k}-1)/2}$, according to the general formula for generalized Appell polynomials, where the sum is over all compositions $n=l_{1}+l_{2}+\cdots +l_{k}$ of $n$ into $k$ positive odd integers. The empty product appearing for $k=n=0$ equals 1. Special values, where all contributing Catalan numbers equal 1, are $[x^{n}]s_{n}(x)={\frac {(-1)^{n}}{2^{n}}}.$ $[x^{n-2}]s_{n}(x)={\frac {(-1)^{n}n(n-1)(n-2)}{2^{n}}}.$ Differentiating the generating function with respect to $x$ gives the recurrence for the first derivative, $s_{n}'(x)=-\sum _{k=0}^{\lfloor (n-1)/2\rfloor }{\frac {n!}{(n-1-2k)!2^{2k+1}}}C_{k}s_{n-1-2k}(x).$ The first few of them are (sequence A137378 in the OEIS) $s_{0}(x)=1;$ $s_{1}(x)=-{\frac {1}{2}}x;$ $s_{2}(x)={\frac {1}{4}}x^{2};$ $s_{3}(x)=-{\frac {3}{4}}x-{\frac {1}{8}}x^{3};$ $s_{4}(x)={\frac {3}{2}}x^{2}+{\frac {1}{16}}x^{4};$ $s_{5}(x)=-{\frac {15}{2}}x-{\frac {15}{8}}x^{3}-{\frac {1}{32}}x^{5};$ $s_{6}(x)={\frac {225}{8}}x^{2}+{\frac {15}{8}}x^{4}+{\frac {1}{64}}x^{6};$ The polynomials $s_{n}(x)$ form the associated Sheffer sequence for $-2t/(1-t^{2})$ (Roman 1984, p. 130). Arthur Erdélyi, Wilhelm Magnus, and Fritz Oberhettinger et al. (1955, p.
251) give an explicit expression for them in terms of the generalized hypergeometric function 3F0: $s_{n}(x)=(-x/2)^{n}{}_{3}F_{0}(-n,{\frac {1-n}{2}},1-{\frac {n}{2}};;-{\frac {4}{x^{2}}})$ References • Erdélyi, Arthur; Magnus, Wilhelm; Oberhettinger, Fritz; Tricomi, Francesco G. (1955), Higher transcendental functions. Vol. III, McGraw-Hill Book Company, Inc., New York-Toronto-London, MR 0066496 • Mott, N. F. (1932), "The Polarisation of Electrons by Double Scattering", Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character, 135 (827): 429–458, doi:10.1098/rspa.1932.0044, ISSN 0950-1207, JSTOR 95868 • Roman, Steven (1984), The umbral calculus, Pure and Applied Mathematics, vol. 111, London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-594380-2, MR 0741185, Reprinted by Dover, 2005
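As an illustrative check (not part of the article; all names are ad hoc), the coefficient formula above — a sum over compositions of $n$ into $k$ positive odd parts, weighted by Catalan numbers — can be evaluated directly with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial, prod

def catalan(k):
    """k-th Catalan number C_k."""
    return comb(2 * k, k) // (k + 1)

def odd_compositions(n, k):
    """Yield all compositions of n into k positive odd parts."""
    if k == 0:
        if n == 0:
            yield ()
        return
    part = 1
    while part <= n - (k - 1):
        for rest in odd_compositions(n - part, k - 1):
            yield (part,) + rest
        part += 2

def mott_coeff(n, k):
    """Coefficient of x^k in the Mott polynomial s_n(x)."""
    s = sum(prod(catalan((l - 1) // 2) for l in c)
            for c in odd_compositions(n, k))
    return Fraction((-1) ** k * factorial(n) * s, factorial(k) * 2 ** n)

# Reproduce s_5(x) = -15/2 x - 15/8 x^3 - 1/32 x^5 from the list above.
print([mott_coeff(5, k) for k in (1, 3, 5)])
# [Fraction(-15, 2), Fraction(-15, 8), Fraction(-1, 32)]
```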
Wikipedia
Multiple homoclinic solutions for p-Laplacian Hamiltonian systems with concave–convex nonlinearities Lili Wan Boundary Value Problems volume 2020, Article number: 4 (2020) The multiplicity of homoclinic solutions is obtained for a class of the p-Laplacian Hamiltonian systems \(\frac{d}{dt}(|\dot{u}(t)|^{p-2}\dot{u}(t))-a(t)|u(t)|^{p-2}u(t)+ \nabla W(t,u(t))=0\) via variational methods, where \(a(t)\) is not necessarily coercive or bounded and \(W(t,u)\) satisfies new concave–convex conditions. Recent results in the literature are generalized even for \(p=2\). Let us consider the p-Laplacian Hamiltonian systems $$ \frac{d}{dt} \bigl( \bigl\vert \dot{u}(t) \bigr\vert ^{p-2}\dot{u}(t) \bigr)-a(t) \bigl\vert u(t) \bigr\vert ^{p-2}u(t)+ \nabla W \bigl(t,u(t) \bigr)=0, $$ where \(t\in \mathbb{R}\), \(u\in \mathbb{R}^{N}\), \(p>1\), \(a\in C( \mathbb{R}, [a_{0},+\infty ))\) with \(a_{0}>0\) and \(W\in C^{1}( \mathbb{R}\times \mathbb{R}^{N}, \mathbb{R})\). As usual, we say that u is a nontrivial homoclinic solution (to 0) if \(u\not \equiv 0\) and \(u(t), \dot{u}(t)\to 0\) as \(|t|\to +\infty \). If \(p=2\) and \(a(t)=L(t)\), (1) reduces to the second order Hamiltonian system $$\begin{aligned} \ddot{u}(t)-L(t)u(t)+\nabla W \bigl(t,u(t) \bigr)=0, \end{aligned}$$ where \(L\in C(\mathbb{R}, \mathbb{R}^{N^{2}})\) is a symmetric and positive definite matrix for all \(t\in \mathbb{R}\). In the last 30 years, the existence and multiplicity of solutions for Hamiltonian systems or other differential systems have been investigated in many papers via variational methods (see [1–4, 9, 11, 14–18, 23]). It is well known that homoclinic orbits play an important role in analyzing the chaos of dynamical systems. Since the problem is considered on the whole space, one of the difficulties in finding solutions of Hamiltonian systems is the lack of compactness of the Sobolev embedding.
To overcome this difficulty, \(L(t)\) and \(W(t,x)\) were assumed to be periodic in t. Without periodicity, Rabinowitz and Tanaka [9] introduced the following coercive condition: (L): there exists a continuous function \(\alpha :\mathbb{R} \to \mathbb{R}^{+}\) satisfying $$\begin{aligned} \bigl(L(t)x,x \bigr)\geq \alpha (t) \vert x \vert ^{2}\quad \text{and}\quad \alpha (t)\to +\infty \quad \text{as } \vert t \vert \to +\infty . \end{aligned}$$ The operator \(\frac{d}{dt}(|\dot{u}(t)|^{p-2}\dot{u}(t))\) in (1) is said to be p-Laplacian. In the last decade there has been an increasing interest in the study of ordinary differential systems driven by the p-Laplacian. The existence and multiplicity of homoclinic orbits for the p-Laplacian Hamiltonian system were studied in recent papers [5–7, 10, 12, 13, 19, 20, 22] and the references therein. Similarly, to overcome the lack of compactness of the Sobolev embedding, the following coercive assumption on a was imposed in [5]: (A): a is a positive continuous function such that $$\begin{aligned} a(t)\to +\infty \quad \text{as } \vert t \vert \to +\infty . \end{aligned}$$ It is clear that the coercive conditions are rather restrictive. In a recent paper, Zhang et al. [22] proved the existence of two nontrivial homoclinic solutions of problem (1) without coercive conditions. They assumed that a is bounded, that is, (\(A'\)): there are two constants \(\tau _{1}\) and \(\tau _{2}\) such that $$\begin{aligned} 0< \tau _{1}\leq a(t)\leq \tau _{2}< +\infty \quad \text{for all } t\in \mathbb{R}. \end{aligned}$$ Besides, they considered the concave–convex nonlinearity, which is of the form $$\begin{aligned} W(t,x)=W_{1}(t,x)+W_{2}(t,x), \end{aligned}$$ where \(W_{1}\) is of super-p growth at infinity and \(W_{2}\) is of sub-p growth at infinity.
Explicitly, the authors supposed the following conditions: (\(V_{1}\)): there exists a constant \(\vartheta >{p}\) such that $$\begin{aligned} 0< \vartheta W_{1}(t,x)\leq \bigl(\nabla W_{1}(t,x),x \bigr),\quad \forall (t,x) \in \mathbb{R}\times \mathbb{R}^{N} \setminus \{0\}; \end{aligned}$$ there exists a continuous function \(w:\mathbb{R} \to \mathbb{R}^{+}\) such that $$\begin{aligned} \lim_{|t|\to +\infty } w(t)=0 \end{aligned}$$ $$\begin{aligned} \bigl\vert \nabla W_{1}(t,x) \bigr\vert \leq w(t) \vert x \vert ^{\vartheta -1}\quad \text{for all } (t,x) \in \mathbb{R}\times \mathbb{R}^{N}; \end{aligned}$$ \(W_{2}(t,0)=0\) for all \(t\in \mathbb{R}\), \(W_{2} \in C^{1}(\mathbb{R}\times \mathbb{R}^{N}, \mathbb{R})\) and there exist a constant \(1<\varrho <2\) and a continuous function \(b:\mathbb{R} \to \mathbb{R}^{+}\) such that $$\begin{aligned} W_{2}(t,x)\geq b(t) \vert x \vert ^{\varrho } \end{aligned}$$ for all \((t,x)\in \mathbb{R}\times \mathbb{R}^{N}\); for all \(t\in \mathbb{R}\) and \(x\in \mathbb{R}^{N}\), $$\begin{aligned} \bigl\vert \nabla W_{2}(t,x) \bigr\vert \leq c(t) \vert x \vert ^{\varrho -1}, \end{aligned}$$ where \(c: \mathbb{R}\to \mathbb{R}^{+}\) is a continuous function such that \(c\in L^{\xi }(\mathbb{R},\mathbb{R})\) for some constant \(1\leq \xi \leq 2\); $$\begin{aligned} \biggl(\frac{p \Vert c \Vert _{\xi }C^{\varrho }_{\varrho \xi ^{*}}}{\varrho }\frac{ \vartheta -\varrho }{\vartheta -p} \biggr)^{\vartheta -p}< \biggl(\frac{ \vartheta }{p \Vert \omega \Vert _{\infty }C^{\vartheta }_{\vartheta }}\frac{p- \varrho }{\vartheta -\varrho } \biggr)^{\varrho -p}, \end{aligned}$$ where \(\xi ^{*}\) is the conjugate component of ξ.
Obviously, we can deduce from the conditions \((V_{1})\) and \((V_{2})\) that (\(W_{0}\)): there exist constants \(c_{1}, c_{2}>0\) and \(\mu >p\) such that $$\begin{aligned} \bigl\vert \nabla W_{1}(t,x) \bigr\vert \leq c_{1} \vert x \vert ^{\mu -1}+c_{2}\quad \text{for all } (t,x) \in \mathbb{R}\times \mathbb{R}^{N}; \end{aligned}$$ (\(W_{1}\)): \(\nabla W_{1}(t,x)=o(|x|^{{p}-1})\) as \(|x|\to 0\) uniformly in t; (\(W_{2}\)): \(W_{1}(t,x)/|x|^{{p}}\to +\infty \) as \(|x|\to + \infty \) uniformly in t; (\(W_{3}\)): there exists \(d_{1}>0\) such that \(W_{1}(t,x)\geq -d _{1}|x|^{{p}}\) for all \((t,x)\in \mathbb{R}\times \mathbb{R}^{N}\); (\(W_{4}\)): there are constants \(\nu >p\) and \(\rho _{0}\), \(d_{2}>0\) such that $$\begin{aligned} \bigl(\nabla W_{1}(t,x),x \bigr)-\nu W_{1}(t,x)\geq -d_{2} \vert x \vert ^{p},\quad \forall t \in \mathbb{R}, \forall \vert x \vert \geq \rho _{0}. \end{aligned}$$ Motivated by the above facts, in this note, we try to drop both conditions \((A)\) and \((A')\) and consider the following conditions: (\(A_{1}\)): \(\int _{\mathbb{R}} a(t)^{-\frac{q}{p}}\,dt<+\infty \), where q is the conjugate component of p, that is, \(\frac{1}{p}+ \frac{1}{q}=1\); (\(A_{2}\)): there exists a constant \(\lambda >q^{-1}\) such that $$\begin{aligned} \operatorname{meas} \bigl(t\in \mathbb{R}|\ \vert t \vert ^{-\lambda p}a(t)< M \bigr)< + \infty , \quad \forall M>0, \end{aligned}$$ where \(\text{meas}(\cdot )\) denotes the Lebesgue measure and q is the conjugate component of p. Using conditions (\(A_{1}\)) and (\(A_{2}\)) separately, we prove some new compact embedding theorems and discuss the multiplicity of homoclinic solutions for problem (1) with weaker combined nonlinearities. Now we state our main results. Suppose that \(W(t,x)=W_{1}(t,x)+W_{2}(t,x)\).
Assume \((A_{1})\), (\(W_{0}\))–(\(W_{4}\)) and the following conditions hold: (\(W_{5}\)): \(W_{2}(t,0)=0\) for all \(t\in \mathbb{R}\) and there exist a constant \(1<\theta <p\) and a continuous function \(b: \mathbb{R}\to \mathbb{R}^{+}\) such that $$\begin{aligned} W_{2}(t,x)\geq b(t) \vert x \vert ^{\theta } \end{aligned}$$ for all \((t,x)\in \mathbb{R}\times \mathbb{R}^{N}\); (\(W_{6}\)): \(W_{2}\in C^{1}(\mathbb{R}\times \mathbb{R}^{N}, \mathbb{R})\) and there exists a continuous function \(c: \mathbb{R} \to \mathbb{R}^{+}\) such that $$\begin{aligned} \bigl\vert \nabla W_{2}(t,x) \bigr\vert \leq c(t) \vert x \vert ^{\theta -1}, \end{aligned}$$ where \(c\in L^{\zeta }( \mathbb{R}, \mathbb{R})\) for some constant \(\zeta >1\) and \(\|c\|_{\zeta }\) is small enough; (\(W_{7}\)): \(\zeta ^{*}(\theta -1)\geq p\), where \(\zeta ^{*}\) is the conjugate component of ζ. Then problem (1) possesses at least two nontrivial homoclinic solutions. From Theorem 1, we see that the conditions related to the super-p term \(W_{1}\) are weaker than those in [22]. There are functions satisfying the conditions (\(W_{0}\))–(\(W_{4}\)) but not (\(V_{1}\)) and (\(V_{2}\)). Moreover, we can also give some examples of a not satisfying the conditions (A) and (\(A'\)). For example, let $$ W_{1}(t,x)= \textstyle\begin{cases} - \vert x \vert ^{4}+ \vert x \vert ^{3}, & \vert x \vert \leq \frac{4}{5} \\ ( \vert x \vert -\frac{4+4^{\frac{1}{3}}}{5} )^{4}+\frac{64-4^{ \frac{4}{3}}}{625}, & \vert x \vert \geq \frac{4}{5}, \end{cases}\displaystyle \qquad W_{2}(t,x)= \frac{\epsilon }{ (1+t^{2} )^{\frac{3}{4}}}\frac{2 \vert x \vert ^{ \frac{3}{2}}}{3}, $$ $$\begin{aligned} a(t)= \textstyle\begin{cases} (n^{2}+1)^{2}( \vert t \vert -n)+c_{0}, & n\leq \vert t \vert < n+\frac{1}{n^{2}+1}, \\ (n^{2}+1)+c_{0}, & n+\frac{1}{n^{2}+1}\leq \vert t \vert < n+\frac{n^{2}}{n^{2}+1}, \\ (n^{2}+1)^{2}(n+1- \vert t \vert )+c_{0}, & n+\frac{n^{2}}{n^{2}+1}\leq \vert t \vert < n+1, \end{cases}\displaystyle \end{aligned}$$ where \(n\in \mathbb{N}\), \(c_{0}\in \mathbb{R}\).
A straightforward computation shows that \(W_{1}\), \(W_{2}\) and a satisfy the assumptions of Theorem 1 with \(p=2\), \(\mu =5\), \(\theta =\frac{3}{2}\), \(\zeta = \frac{4}{3}\) and \(\epsilon >0\) small enough. By replacing the condition \((A_{1})\), we have the following theorem. Assume that \(W(t,x)=W_{1}(t,x)+W_{2}(t,x)\). Suppose that \((A_{2})\) and (\(W_{0}\))–(\(W_{7}\)) hold; then problem (1) possesses at least two nontrivial homoclinic solutions. There exist functions that satisfy the condition (\(A_{2}\)) but do not satisfy the conditions (A) and (\(A'\)), such as \(a(t)=t^{4}{\sin }^{2}t+1\) with \(p=2\) and \(\lambda =1\). Thus Theorem 2 is different from the previous results. Proof of Theorem 1 First, we introduce the space in which we can construct the variational framework. Let $$\begin{aligned} E= \biggl\{ u\in W^{1,p} \bigl(\mathbb{R}, \mathbb{R}^{N} \bigr): \int _{\mathbb{R}} \bigl( \bigl\vert \dot{u}(t) \bigr\vert ^{p}+a(t) \bigl\vert u(t) \bigr\vert ^{p} \bigr) \,dt< +\infty \biggr\} \end{aligned}$$ with the norm $$\begin{aligned} \Vert u \Vert = \biggl( \int _{\mathbb{R}} \bigl( \bigl\vert \dot{u}(t) \bigr\vert ^{p}+a(t) \bigl\vert u(t) \bigr\vert ^{p} \bigr) \,dt \biggr) ^{\frac{1}{p}}. \end{aligned}$$ Then E is a uniformly convex Banach space. Denote by \(L^{\gamma }( \mathbb{R}, \mathbb{R}^{N})\) (\(1\leq \gamma <+\infty \)) the Banach spaces of functions with the norms $$\begin{aligned} \Vert u \Vert _{\gamma }= \biggl( \int _{\mathbb{R}} \bigl\vert u(t) \bigr\vert ^{\gamma } \,dt \biggr) ^{\frac{1}{\gamma }}, \end{aligned}$$ and \(L^{\infty }(\mathbb{R}, \mathbb{R}^{N})\) is the Banach space of essentially bounded functions under the norm $$\begin{aligned} \Vert u \Vert _{\infty }=\operatorname{ess}\ \sup \bigl\{ \bigl\vert u(t) \bigr\vert : t\in \mathbb{R} \bigr\} . \end{aligned}$$ ([22]) The embedding \(E\hookrightarrow L^{\gamma }( \mathbb{R},\mathbb{R}^{N})\) (\(p\leq \gamma \leq +\infty \)) is continuous.
Under the condition \((A_{1})\), the embedding \(E\hookrightarrow L^{1}( \mathbb{R},\mathbb{R}^{N})\) is continuous and compact. By \((A_{1})\) and Hölder's inequality, for all \(u\in E\) one has $$\begin{aligned} \int _{\mathbb{R}} \bigl\vert u(t) \bigr\vert \,dt =& \int _{\mathbb{R}}a(t)^{-\frac{1}{p}}a(t)^{ \frac{1}{p}} \bigl\vert u(t) \bigr\vert \,dt \\ \leq & \biggl( \int _{\mathbb{R}} a(t)^{-\frac{q}{p}}\,dt \biggr)^{ \frac{1}{q}} \biggl( \int _{\mathbb{R}} a(t) \bigl\vert u(t) \bigr\vert ^{p}\,dt \biggr)^{ \frac{1}{p}} \\ \leq & \biggl( \int _{\mathbb{R}} a(t)^{-\frac{q}{p}}\,dt \biggr)^{ \frac{1}{q}} \Vert u \Vert , \end{aligned}$$ which implies that the embedding is continuous. Let \(\{u_{n}\}\subset E\) be a sequence such that \(u_{n}\rightharpoonup 0\) in E. By the Banach–Steinhaus theorem, there exists \(M_{0}>0\) such that $$\begin{aligned} \sup_{n\in \mathbb{N}} \Vert u_{n} \Vert \leq M_{0}. \end{aligned}$$ Since the embedding is compact on bounded domains, it suffices to show that, for any \(\varepsilon >0\), there exists \(r>0\) such that $$\begin{aligned} \int _{ \vert t \vert > r} \bigl\vert u_{n}(t) \bigr\vert \,dt< \varepsilon . \end{aligned}$$ In fact, we have $$\begin{aligned} \int _{ \vert t \vert > r} \bigl\vert u_{n}(t) \bigr\vert \,dt \leq & \int _{ \vert t \vert >r}a(t)^{-\frac{1}{p}}a(t)^{ \frac{1}{p}} \bigl\vert u_{n}(t) \bigr\vert \,dt \\ \leq & \biggl( \int _{ \vert t \vert >r} a(t)^{-\frac{q}{p}}\,dt \biggr)^{\frac{1}{q}} \biggl( \int _{ \vert t \vert >r}a(t) \bigl\vert u_{n}(t) \bigr\vert ^{p} \,dt \biggr)^{\frac{1}{p}} \\ \leq & \biggl( \int _{ \vert t \vert >r} a(t)^{-\frac{q}{p}}\,dt \biggr)^{\frac{1}{q}} \Vert u_{n} \Vert \\ \leq & \biggl( \int _{ \vert t \vert >r} a(t)^{-\frac{q}{p}}\,dt \biggr)^{\frac{1}{q}}M _{0}. \end{aligned}$$ It follows from (\(A_{1}\)) that this can be made arbitrarily small by choosing r large. Hence, we get \(u_{n}\to 0\) in \(L^{1}(\mathbb{R}, \mathbb{R}^{N})\).
□ From Lemma 1 and Lemma 2, for \(\gamma =1\) or \(p\leq \gamma \leq + \infty \), there exists \(C_{\gamma }>0\) such that $$\begin{aligned} \Vert u \Vert _{\gamma }\leq C_{\gamma } \Vert u \Vert , \quad \forall u\in E. \end{aligned}$$ Suppose that the conditions \((A_{1})\) and \((W_{1})\) hold; then \(\nabla W_{1}(t,u_{n})\to \nabla W_{1}(t,u)\) in \(L^{{q}}(\mathbb{R}, \mathbb{R}^{N})\) whenever \(u_{n}\rightharpoonup u\) in E. Assume that \(u_{n}\rightharpoonup u\) in E. By the Banach–Steinhaus theorem and (2), there exists \(M_{1}>0\) such that $$\begin{aligned} \sup_{n\in \mathbb{N}} \Vert u_{n} \Vert _{\infty }\leq M_{1} \quad \text{and}\quad \Vert u \Vert _{\infty }\leq M_{1}. \end{aligned}$$ We can deduce from \((W_{0})\), \((W_{1})\) and (3) that there exists \(M_{2}>0\) such that $$\begin{aligned} \bigl\vert \nabla W_{1}(t,u_{n}) \bigr\vert \leq M_{2} \bigl\vert u_{n}(t) \bigr\vert ^{{p}-1} \quad \text{and}\quad \bigl\vert \nabla W_{1}(t,u) \bigr\vert \leq M_{2} \bigl\vert u(t) \bigr\vert ^{{p}-1}, \end{aligned}$$ which implies that $$\begin{aligned} \bigl\vert \nabla W_{1}(t,u_{n})-\nabla W_{1}(t,u) \bigr\vert \leq & M_{2} \bigl( \bigl\vert u_{n}(t) \bigr\vert ^{ {p}-1}+ \bigl\vert u(t) \bigr\vert ^{{p}-1} \bigr) \\ \leq &M_{2} \bigl[2^{{p}-1} \bigl( \bigl\vert u_{n}(t)-u(t) \bigr\vert ^{{p}-1}+ \bigl\vert u(t) \bigr\vert ^{{p}-1} \bigr)+ \bigl\vert u(t) \bigr\vert ^{ {p}-1} \bigr] \\ \leq &M_{3} \bigl( \bigl\vert u_{n}(t)-u(t) \bigr\vert ^{{p}-1}+ \bigl\vert u(t) \bigr\vert ^{{p}-1} \bigr), \end{aligned}$$ where \(M_{3}\) is a positive constant.
By (2), (3), (4) and Lemma 2 one gets $$\begin{aligned}& \int _{\mathbb{R}} \bigl\vert \nabla W_{1}(t,u_{n})- \nabla W_{1}(t,u) \bigr\vert ^{{q}}\,dt \\& \quad \leq M_{3}^{q} \int _{\mathbb{R}} \bigl( \bigl\vert u_{n}(t)-u(t) \bigr\vert ^{{p}-1}+ \bigl\vert u(t) \bigr\vert ^{ {p}-1} \bigr)^{{q}}\,dt \\& \quad \leq 2^{{q}-1}M_{3}^{q} \int _{\mathbb{R}} \bigl( \bigl\vert u_{n}(t)-u(t) \bigr\vert ^{{p}}+ \bigl\vert u(t) \bigr\vert ^{p} \bigr)\,dt \\& \quad \leq 2^{{q}-1}M_{3}^{{q}} \Vert u_{n}-u \Vert ^{p-1}_{\infty } \int _{ \mathbb{R}} \bigl\vert u_{n}(t)-u(t) \bigr\vert \,dt+2^{{q}-1}M_{3}^{{q}} \Vert u \Vert ^{p}_{p} \\& \quad \leq 2^{{q}-1}M_{3}^{{q}}(2M_{1})^{p-1} \int _{\mathbb{R}} \bigl\vert u_{n}(t)-u(t) \bigr\vert \,dt+2^{ {q}-1}M_{3}^{{q}}C^{p}_{p} \Vert u \Vert ^{p} \\& \quad < +\infty . \end{aligned}$$ Using Lebesgue's dominated convergence theorem, we can get the conclusion. □ The corresponding functional of (1) is defined by $$\begin{aligned} I(u) =& \int _{\mathbb{R}}\frac{1}{p} \bigl( \bigl\vert \dot{u}(t) \bigr\vert ^{p}+a(t) \bigl\vert u(t) \bigr\vert ^{p} \bigr)\,dt- \int _{\mathbb{R}} W \bigl(t,u(t) \bigr)\,dt \\ =&\frac{1}{p} \Vert u \Vert ^{p}- \int _{\mathbb{R}} W \bigl(t,u(t) \bigr)\,dt. \end{aligned}$$ For convenience, let $$\begin{aligned} J(u) =& \int _{\mathbb{R}}\frac{1}{p} \bigl( \bigl\vert \dot{u}(t) \bigr\vert ^{p}+a(t) \bigl\vert u(t) \bigr\vert ^{p} \bigr)\,dt, \\ \varPhi (u) =& \int _{\mathbb{R}} W_{1} \bigl(t,u(t) \bigr)\,dt, \\ \varPsi (u) =& \int _{ \mathbb{R}} W_{2} \bigl(t,u(t) \bigr)\,dt. \end{aligned}$$ \(J\in C^{1}(E,\mathbb{R})\) and $$\begin{aligned} \bigl\langle J'(u),v \bigr\rangle = \int _{\mathbb{R}} \bigl[ \bigl\vert \dot{u}(t) \bigr\vert ^{p-2} \bigl( \dot{u}(t),\dot{v}(t) \bigr)+a(t) \bigl\vert u(t) \bigr\vert ^{p-2} \bigl(u(t),v(t) \bigr) \bigr] \,dt,\quad \forall u,v\in E. \end{aligned}$$ Under the conditions of Theorem 1, \(I\in C^{1}(E, \mathbb{R})\). 
Moreover, one has $$\begin{aligned} \bigl\langle I'(u),v \bigr\rangle =& \int _{\mathbb{R}} \bigl[ \bigl\vert \dot{u}(t) \bigr\vert ^{p-2} \bigl( \dot{u}(t),\dot{v}(t) \bigr)+a(t) \bigl\vert u(t) \bigr\vert ^{p-2} \bigl(u(t),v(t) \bigr) \\ &{}- \bigl(\nabla W \bigl(t,u(t) \bigr),v(t) \bigr) \bigr]\,dt, \quad \forall u,v \in E. \end{aligned}$$ The critical points of I in E are homoclinic solutions of (1) with \(u(\pm \infty )=\dot{u}(\pm \infty )=0\). Since it is routine to prove that (i) holds, we just need to prove (ii) and (iii). First, we show that I in (5) is well defined. By \((W_{0})\) and \((W_{1})\), for any \(\varepsilon >0\), there is \(C_{\varepsilon }>0\) such that $$\begin{aligned} \bigl\vert W_{1}(t,x) \bigr\vert \leq \varepsilon \vert x \vert ^{p}+C_{\varepsilon } \vert x \vert ^{\mu }, \quad \forall (t,x)\in \mathbb{R}\times \mathbb{R}^{N}. \end{aligned}$$ Then by (2) and (7) one gets $$\begin{aligned} \int _{\mathbb{R}} \bigl\vert W_{1} \bigl(t,u(t) \bigr) \bigr\vert \,dt \leq & \varepsilon \int _{ \mathbb{R}} \bigl\vert u(t) \bigr\vert ^{p} \,dt+C_{\varepsilon } \int _{\mathbb{R}} \bigl\vert u(t) \bigr\vert ^{ \mu } \,dt \leq \varepsilon C^{p}_{p} \Vert u \Vert ^{p} +C_{\varepsilon }C^{\mu }_{ \mu } \Vert u \Vert ^{\mu }< +\infty . \end{aligned}$$ Besides, by (2), \((W_{6})\), \((W_{7})\) and Hölder's inequality we have $$\begin{aligned} \int _{\mathbb{R}} \bigl\vert W_{2} \bigl(t,u(t) \bigr) \bigr\vert \,dt \leq &\frac{1}{\theta } \int _{\mathbb{R}} c(t) \bigl\vert u(t) \bigr\vert ^{\theta }\,dt \\ \leq &\frac{1}{\theta } \Vert c \Vert _{\zeta } \Vert u \Vert ^{\theta }_{\theta \zeta ^{*}} \\ \leq &\frac{C^{\theta }_{\theta \zeta ^{*}}}{\theta } \Vert c \Vert _{\zeta } \Vert u \Vert ^{\theta }< +\infty . \end{aligned}$$ Therefore I is well defined. Next, we show that \(I\in C^{1}(E, \mathbb{R})\). In view of (i), it is sufficient to show that \(\varPhi \in C^{1}(E,\mathbb{R})\) and \(\varPsi \in C^{1}(E,\mathbb{R})\).
Let \(\phi (u)\) be as follows: $$\begin{aligned} \phi (u)v= \int _{\mathbb{R}} \bigl(\nabla W_{1} \bigl(t,u(t) \bigr),v(t) \bigr)\,dt,\quad \forall v \in E. \end{aligned}$$ Obviously, \(\phi (u)\) is linear. We show \(\phi (u)\) is bounded in the following proof. By (2), (9), \((W_{0})\) and Hölder's inequality, one has $$\begin{aligned} \bigl\vert \phi (u)v \bigr\vert \leq & c_{1} \int _{\mathbb{R}} \bigl\vert u(t) \bigr\vert ^{\mu -1} \bigl\vert v(t) \bigr\vert \,dt+c _{2} \int _{\mathbb{R}} \bigl\vert v(t) \bigr\vert \,dt \\ \leq & c_{1} \biggl( \int _{\mathbb{R}} \bigl\vert u(t) \bigr\vert ^{(\mu -1)\mu ^{*}} \,dt \biggr) ^{\frac{1}{\mu ^{*}}} \biggl( \int \bigl\vert v(t) \bigr\vert ^{\mu }\,dt \biggr)^{\frac{1}{ \mu }}+c_{2} \Vert v \Vert _{1} \\ \leq &c_{1} \Vert u \Vert _{\mu }^{\frac{\mu }{\mu ^{*}}} \Vert v \Vert _{\mu }+c_{2}C _{1} \Vert v \Vert \\ \leq & \bigl(c_{1}C_{\mu }^{\frac{\mu }{\mu ^{*}}+1} \Vert u \Vert ^{\frac{ \mu }{\mu ^{*}}}+c_{2}C_{1} \bigr) \Vert v \Vert , \end{aligned}$$ where \(\mu ^{*}\) is the conjugate component of μ. It follows from (10) that \(\phi (u)\) is bounded. Subsequently, we show that Φ is of \(C^{1}\) class. For any \(u,v\in E\), by the mean value theorem, (\(W_{0}\)) and Hölder's inequality, one gets $$\begin{aligned}& \biggl\vert \int _{\mathbb{R}} W_{1} \bigl(t, u(t)+v(t) \bigr)\,dt- \int _{\mathbb{R}}(W _{1} \bigl(t,u(t) \bigr)\,dt \biggr\vert \\& \quad = \biggl\vert \int _{\mathbb{R}}(\nabla W_{1} \bigl(t, u(t)+h(t)v(t),v(t) \bigr)\,dt \biggr\vert \\& \quad \leq c_{1} \int _{\mathbb{R}} \bigl\vert u(t)+h(t)v(t) \bigr\vert ^{\mu -1} \bigl\vert v(t) \bigr\vert \,dt +c_{2} \int _{\mathbb{R}} \bigl\vert v(t) \bigr\vert \,dt \\& \quad \leq c_{1} \Vert u+hv \Vert _{\mu }^{\frac{\mu }{\mu ^{*}}} \Vert v \Vert _{\mu } +c_{2}C_{1} \Vert v \Vert \\& \quad \leq \bigl(c_{1}C_{\mu }^{\frac{\mu }{\mu ^{*}}+1} \Vert u+hv \Vert ^{\frac{ \mu }{\mu ^{*}}}+c_{2}C_{1} \bigr) \Vert v \Vert , \end{aligned}$$ where \(h(t)\in (0,1)\). 
Combining (10) and (11), we get $$\begin{aligned} \int _{\mathbb{R}} W_{1} \bigl(t,u(t)+v(t) \bigr)\,dt- \int _{\mathbb{R}} W_{1} \bigl(t,u(t) \bigr)\,dt- \int _{\mathbb{R}} \bigl(\nabla W_{1} \bigl(t, u(t) \bigr),v(t) \bigr)\,dt\to 0 \end{aligned}$$ as \(v\to 0\) in E, which shows $$\begin{aligned} \bigl\langle \varPhi '(u),v \bigr\rangle = \int _{\mathbb{R}} \bigl(\nabla W_{1} \bigl(t,u(t) \bigr),v(t) \bigr)\,dt \end{aligned}$$ for any \(u, v\in E\). It remains to prove that \(\varPhi '\) is continuous. Assume that \(u\to u_{0}\) in E and note that $$\begin{aligned}& \sup_{ \Vert v \Vert =1} \bigl\vert \bigl\langle \varPhi '(u),v \bigr\rangle - \bigl\langle \varPhi '(u_{0}),v \bigr\rangle \bigr\vert \\& \quad =\sup_{ \Vert v \Vert =1} \biggl\vert \int _{\mathbb{R}} \bigl(\nabla W_{1} \bigl(t, u(t) \bigr)- \nabla W_{1} \bigl(t,u_{0}(t) \bigr),v(t) \bigr) \,dt \biggr\vert \\& \quad \leq \sup_{ \Vert v \Vert =1} \bigl\Vert \nabla W_{1}(t,u)- \nabla W_{1}(t,u_{0}) \bigr\Vert _{q} \biggl( \int _{\mathbb{R}} \bigl\vert v(t) \bigr\vert ^{{p}} \,dt \biggr)^{\frac{1}{p}} \\& \quad \leq C_{p}\sup_{ \Vert v \Vert =1} \bigl\Vert \nabla W_{1}(t,u)-\nabla W_{1}(t,u_{0}) \bigr\Vert _{q}. \end{aligned}$$ Then, by Lemma 3, we have \(\langle \varPhi '(u),v\rangle \to \langle \varPhi '(u_{0}),v\rangle \) as \(u\to u_{0}\) in E, uniformly with respect to v, which shows that \(\varPhi '\) is continuous.
Moreover, by \((W_{6})\) and \((W_{7})\) one has $$\begin{aligned} \biggl\vert \int _{\mathbb{R}} \bigl(\nabla W_{2} \bigl(t,u(t) \bigr),v(t) \bigr)\,dt \biggr\vert \leq & \int _{\mathbb{R}} c(t) \bigl\vert u(t) \bigr\vert ^{\theta -1} \bigl\vert v(t) \bigr\vert \,dt \\ \leq & \Vert u \Vert _{\zeta ^{*}(\theta -1)}^{\theta -1} \biggl( \int _{ \mathbb{R}}c^{\zeta }(t)\,dt \biggr)^{\frac{1}{\zeta }} \Vert v \Vert _{\infty } \end{aligned}$$ for any \(u, v\in E\). Similar to the above proof, we can see that $$\begin{aligned} \bigl\langle \varPsi '(u),v \bigr\rangle = \int _{\mathbb{R}} \bigl(\nabla W_{2} \bigl(t,u(t) \bigr),v(t) \bigr)\,dt \end{aligned}$$ for any \(u, v\in E\). Now we prove that \(\varPsi '\) is continuous. Suppose that \(u\to u_{0}\) in E. By \((W_{6})\), for any \(\varepsilon >0\), there exists \(T>0\) such that $$\begin{aligned} \biggl( \int _{|t|>T}c^{\zeta }(t)\,dt \biggr)^{\frac{1}{\zeta }}< \varepsilon . \end{aligned}$$ On account of the continuity of \(\nabla W_{2}(t,x)\) and \(u\to u_{0}\) in \(L^{\infty }_{\mathrm{loc}}(\mathbb{R},\mathbb{R}^{N})\), it follows that $$\begin{aligned} \int _{|t|\leq T} \bigl(\nabla W_{2} \bigl(t, u(t) \bigr)- \nabla W_{2} \bigl(t,u_{0}(t) \bigr),v(t) \bigr) \,dt< \varepsilon .
\end{aligned}$$ By (12), (13), \((W_{6})\), \((W_{7})\) and Hölder's inequality, one gets $$\begin{aligned}& \sup_{ \Vert v \Vert =1} \bigl\vert \bigl\langle \varPsi '(u),v \bigr\rangle - \bigl\langle \varPsi '(u_{0}),v \bigr\rangle \bigr\vert \\& \quad =\sup_{ \Vert v \Vert =1} \biggl\vert \int _{\mathbb{R}} \bigl(\nabla W_{2} \bigl(t, u(t) \bigr)- \nabla W_{2} \bigl(t,u_{0}(t) \bigr),v(t) \bigr) \,dt \biggr\vert \\& \quad \leq \sup_{ \Vert v \Vert =1} \biggl\vert \int _{ \vert t \vert \leq T} \bigl(\nabla W_{2} \bigl(t, u(t) \bigr)- \nabla W_{2} \bigl(t,u_{0}(t) \bigr),v(t) \bigr) \,dt \biggr\vert \\& \qquad {}+\sup_{ \Vert v \Vert =1} \biggl\vert \int _{ \vert t \vert > T} \bigl(\nabla W_{2} \bigl(t, u(t) \bigr)- \nabla W _{2} \bigl(t,u_{0}(t) \bigr),v(t) \bigr) \,dt \biggr\vert \\& \quad \leq \varepsilon +\sup_{ \Vert v \Vert =1} \biggl\vert \int _{ \vert t \vert >T} c(t) \bigl( \bigl\vert u(t) \bigr\vert ^{ \theta -1}+ \bigl\vert u_{0}(t) \bigr\vert ^{\theta -1} \bigr) \bigl\vert v(t) \bigr\vert \,dt \biggr\vert \\& \quad \leq \varepsilon +C_{\infty } \biggl( \int _{ \vert t \vert >T} c^{\zeta }(t) \,dt \biggr) ^{\frac{1}{\zeta }} \bigl( \Vert u \Vert ^{\theta -1}_{(\theta -1)\zeta ^{*}}+ \Vert u_{0} \Vert ^{\theta -1}_{(\theta -1)\zeta ^{*}} \bigr) \\& \quad \leq \varepsilon +\varepsilon C_{\infty } \bigl( \Vert u \Vert ^{\theta -1}_{( \theta -1)\zeta ^{*}}+ \Vert u_{0} \Vert ^{\theta -1}_{(\theta -1)\zeta ^{*}} \bigr), \end{aligned}$$ which shows that \(\varPsi '\) is continuous. Thus (ii) holds. Finally, similar to the proof of Lemma 3.1 in [21], one can check that (iii) holds. □ Subsequently, we state the following useful critical point theorem. Let E be a real Banach space and let \(I:E\to \mathbb{R}\) be a \(C^{1}\)-smooth functional satisfying the \((C)\) condition, that is, \(\{u_{n}\}\) has a convergent subsequence in E whenever \(\{I(u_{n})\}\) is bounded and \(\|I'(u_{n})\|_{E^{*}}(1+\|u_{n}\|)\to 0\) as \(n\to +\infty \). 
If I satisfies the following conditions: (i) \(I(0)=0\); (ii) there exist constants \(\varrho , \alpha >0\) such that \(I|_{\partial B_{\varrho }(0)}\geq \alpha \); (iii) there exists \(e\in E\setminus \bar{B}_{\varrho }(0)\) such that \(I(e)\leq 0\), where \(B_{\varrho }(0)\) is an open ball in E of radius ϱ centered at 0, then I possesses a critical value \(c\geq \alpha \) given by $$\begin{aligned} c=\inf_{g\in \varGamma }\max_{s\in [0,1]}I \bigl(g(s) \bigr), \end{aligned}$$ where $$ \varGamma = \bigl\{ g\in C \bigl([0,1],E \bigr): g(0)=0, g(1)=e \bigr\} . $$ Assume that the conditions of Theorem 1 hold; then I satisfies the \((C)\) condition. Suppose that \(\{u_{n}\}\subset E\) is a sequence such that \(\{I(u_{n}) \}\) is bounded and \(\|I'(u_{n})\|_{E^{*}}(1+\|u_{n}\|)\to 0\) as \(n\to +\infty \). Then there exists a constant \(M_{4}>0\) such that $$\begin{aligned} \bigl\vert I(u_{n}) \bigr\vert \leq M_{4}, \qquad \bigl\Vert I'(u_{n}) \bigr\Vert _{E^{*}} \bigl(1+ \Vert u_{n} \Vert \bigr)\leq M_{4}. \end{aligned}$$ Now we prove that \(\{u_{n}\}\) is bounded in E. Arguing in an indirect way, we assume that \(\|u_{n}\|\to +\infty \) as \(n\to +\infty \). Set \(z_{n}=\frac{u_{n}}{\|u_{n}\|}\), then \(\|z_{n}\|=1\), which implies that there exists a subsequence of \(\{z_{n}\}\), still denoted by \(\{z_{n}\}\), such that \(z_{n}\rightharpoonup z_{0}\) in E. By (2), (5), (8) and (14), we obtain $$\begin{aligned} \biggl\vert \int _{\mathbb{R}} \frac{W_{1}(t,u_{n})}{ \Vert u_{n} \Vert ^{p}}\,dt- \frac{1}{p} \biggr\vert =& \biggl\vert \frac{I(u_{n})}{ \Vert u_{n} \Vert ^{p}}+ \int _{\mathbb{R}}\frac{W_{2}(t,u_{n})}{ \Vert u_{n} \Vert ^{p}}\,dt \biggr\vert \\ \leq &\frac{M_{4}}{ \Vert u_{n} \Vert ^{p}}+\frac{ \Vert c \Vert _{\zeta }C^{\theta } _{\theta \zeta ^{*}} \Vert u_{n} \Vert ^{\theta }}{\theta \Vert u_{n} \Vert ^{p}} \\ \to &0 \quad \text{as } n\to +\infty . \end{aligned}$$ In the following, we consider two opposite cases. Case 1: \(z_{0}\not \equiv 0\). Let \(\varOmega =\{t\in \mathbb{R}||z_{0}(t)|>0 \}\). 
Then we can see that \(\text{meas}(\varOmega )>0\), where meas denotes the Lebesgue measure. Then there exists \(\chi >0\) such that \(\operatorname{meas}(\varLambda )>0\), where \(\varLambda =\varOmega \cap P_{\chi }\) and \(P_{\chi }=\{t\in \mathbb{R}||t| \leq \chi \}\). Since \(\|u_{n}\|\to +\infty \) as \(n\to +\infty \), we have \(|u_{n}(t)|\to +\infty \) as \(n\to +\infty \) for a.e. \(t\in \varLambda \). By \((W_{2})\), \((W_{3})\) and Fatou's lemma, one can get $$\begin{aligned}& \lim_{n\to +\infty } \int _{\mathbb{R}} \frac{W_{1}(t,u_{n}(t))}{ \Vert u _{n} \Vert ^{p}}\,dt \\& \quad =\lim_{n\to +\infty } \int _{\varLambda } \frac{W_{1}(t,u_{n}(t))}{ \Vert u _{n} \Vert ^{p}}\,dt+\lim _{n\to +\infty } \int _{\mathbb{R}\setminus \varLambda } \frac{W_{1}(t,u_{n}(t))}{ \Vert u_{n} \Vert ^{p}}\,dt \\& \quad \geq \lim_{n\to +\infty } \int _{\varLambda } \frac{W_{1}(t,u_{n}(t))}{ \vert u _{n}(t) \vert ^{{p}}} \bigl\vert z_{n}(t) \bigr\vert ^{p}\,dt-d_{1} \int _{\mathbb{R}\setminus \varLambda } \bigl\vert z_{n}(t) \bigr\vert ^{p}\,dt \\& \quad \geq \lim_{n\to +\infty } \int _{\varLambda } \frac{W_{1}(t,u_{n}(t))}{ \vert u _{n}(t) \vert ^{{p}}} \bigl\vert z_{n}(t) \bigr\vert ^{{p}}\,dt-d_{1}C^{p}_{p} \Vert z_{n} \Vert ^{p} \\& \quad =+\infty , \end{aligned}$$ which contradicts (15). So \(\|u_{n}\|\) is bounded in this case. Case 2: \(z_{0}\equiv 0\). Set $$\begin{aligned} \widetilde{W_{1}}(t,x)= \bigl(\nabla W_{1}(t,x),x \bigr)- \nu W_{1}(t,x), \end{aligned}$$ where ν is defined in \((W_{4})\). From \((W_{1})\), we can deduce that \(\widetilde{W_{1}}(t,x)=o(|x|^{{p}})\) as \(|x|\to 0\), then there exists \(\rho _{1}\in (0,\rho _{0})\) such that $$\begin{aligned} \bigl\vert \widetilde{W_{1}}(t,x) \bigr\vert \leq \vert x \vert ^{p} \end{aligned}$$ for all \(|x|\leq \rho _{1}\), where \(\rho _{0}\) is defined in \((W_{4})\). 
It follows from (6), (8), (14), (16), \((W_{4})\) and \((W_{6})\) that $$\begin{aligned} o(1) =&\frac{\nu M_{4}+M_{4}}{ \Vert u_{n} \Vert ^{p}} \\ \geq& \frac{\nu I(u_{n})-\langle I'(u_{n}),u_{n}\rangle }{ \Vert u_{n} \Vert ^{p}} \\ \geq& \biggl(\frac{\nu }{p}-1 \biggr)+\frac{1}{ \Vert u_{n} \Vert ^{p}} \int _{\mathbb{R}}\widetilde{W_{1}} \bigl(t,u_{n}(t) \bigr)\,dt-\frac{\nu +\theta }{ \theta \Vert u_{n} \Vert ^{p}} \Vert c \Vert _{\zeta }C^{\theta }_{\theta \zeta ^{*}} \Vert u _{n} \Vert ^{\theta } \\ \geq& \biggl(\frac{\nu }{{p}}-1 \biggr)+ \frac{1}{ \Vert u_{n} \Vert ^{p}} \int _{ \vert u_{n} \vert \leq \rho _{1}}\widetilde{W_{1}} \bigl(t,u_{n}(t) \bigr)\,dt+\frac{1}{ \Vert u_{n} \Vert ^{p}} \int _{\rho _{1}< \vert u_{n} \vert \leq \rho _{0}}\widetilde{W_{1}} \bigl(t,u _{n}(t) \bigr)\,dt \\ &{}+\frac{1}{ \Vert u_{n} \Vert ^{p}} \int _{ \vert u_{n} \vert > \rho _{0}}\widetilde{W_{1}} \bigl(t,u _{n}(t) \bigr)\,dt-o(1) \\ \geq &\biggl(\frac{\nu }{p}-1 \biggr)-\frac{1}{ \Vert u_{n} \Vert ^{p}} \biggl( \int _{ \vert u_{n} \vert \leq \rho _{1}} \bigl\vert u_{n}(t) \bigr\vert ^{p}\,dt+d_{2} \int _{ \vert u_{n} \vert >\rho _{0}} \bigl\vert u _{n}(t) \bigr\vert ^{p}\,dt \biggr) \\ &{}-\frac{\max_{\rho _{1}< \vert x \vert \leq \rho _{0}} \vert \widetilde{W_{1}}(t,x) \vert }{ \rho ^{p}_{1}} \int _{\rho _{1}< \vert u_{n} \vert \leq \rho _{0}}\frac{ \vert u_{n}(t) \vert ^{p}}{ \Vert u_{n} \Vert ^{p}}\,dt-o(1) \\ \geq &\biggl(\frac{\nu }{p}-1 \biggr)- \biggl(1+d_{2}+ \frac{ \max_{\rho _{1}< \vert x \vert \leq \rho _{0}} \vert \widetilde{W_{1}}(t,x) \vert }{\rho ^{p} _{1}} \biggr) \int _{\mathbb{R}} \bigl\vert z_{n}(t) \bigr\vert ^{p}\,dt-o(1) \\ \to& \frac{\nu }{p}-1\quad \text{as } n\to +\infty , \end{aligned}$$ which is a contradiction. Therefore, \(\|u_{n}\|\) is bounded. 
Going if necessary to a subsequence, we can assume that \(u_{n}\rightharpoonup u\) in E, which yields $$\begin{aligned} \bigl\langle I'(u_{n})-I'(u),u_{n}-u \bigr\rangle =& \Vert u_{n}-u \Vert ^{p} \\ &{}- \int _{\mathbb{R}} \bigl(\nabla W_{1} \bigl(t,u_{n}(t) \bigr)-\nabla W_{1} \bigl(t,u(t) \bigr),u _{n}(t)-u(t) \bigr)\,dt \\ &{}- \int _{\mathbb{R}} \bigl(\nabla W_{2} \bigl(t,u_{n}(t) \bigr)-\nabla W_{2} \bigl(t,u(t) \bigr),u _{n}(t)-u(t) \bigr)\,dt \\ \to &0 \quad \text{as}\ n\to +\infty . \end{aligned}$$ It follows from (2), \((W_{0})\) and Lemma 2 that $$\begin{aligned}& \int _{\mathbb{R}} \bigl(\nabla W_{1} \bigl(t,u_{n}(t) \bigr)-\nabla W_{1} \bigl(t,u(t) \bigr), u _{n}(t)-u(t) \bigr)\,dt \\& \quad \leq \int _{\mathbb{R}} \bigl(c_{1} \bigl\vert u_{n}(t) \bigr\vert ^{\mu -1}+c_{1} \bigl\vert u(t) \bigr\vert ^{\mu -1}+2c _{2} \bigr) \bigl\vert u_{n}(t)-u(t) \bigr\vert \,dt \\& \quad \leq \bigl(c_{1}C^{\mu -1}_{\infty } \Vert u_{n} \Vert ^{\mu -1}+c_{1}C^{\mu -1} _{\infty } \Vert u \Vert ^{\mu -1}+2c_{2} \bigr) \Vert u_{n}-u \Vert _{1} \\& \quad \to 0 \quad \text{as } n\to +\infty . \end{aligned}$$ On account of the continuity of \(\nabla W_{2}(t,x)\) and \(u_{n}\to u\) in \(L^{\infty }_{\mathrm{loc}}(\mathbb{R},\mathbb{R}^{N})\), there exists \(n_{0}\in \mathbb{N}\) such that $$\begin{aligned} \int _{|t|\leq T} \bigl(\nabla W_{2} \bigl(t, u_{n}(t) \bigr)-\nabla W_{2} \bigl(t,u(t) \bigr),u_{n}(t)-u(t) \bigr)\,dt< \varepsilon ,\quad \forall n\geq n_{0}, \end{aligned}$$ where T is defined in (12). 
In addition, by (12), \((W_{7})\) and Hölder's inequality, we have $$\begin{aligned}& \int _{ \vert t \vert >T} \bigl(\nabla W_{2} \bigl(t,u_{n}(t) \bigr)-\nabla W_{2} \bigl(t,u(t) \bigr), u_{n}(t)-u(t) \bigr)\,dt \\& \quad \leq \int _{ \vert t \vert >T}c(t) \bigl( \bigl\vert u_{n}(t) \bigr\vert ^{\theta -1}+ \bigl\vert u(t) \bigr\vert ^{\theta -1} \bigr) \bigl\vert u _{n}(t)-u(t) \bigr\vert \,dt \\& \quad \leq \Vert u_{n}-u \Vert _{\infty } \biggl( \int _{ \vert t \vert >T}c^{\zeta }(t)\,dt \biggr) ^{\frac{1}{\zeta }} \bigl( \Vert u_{n} \Vert ^{\theta -1}_{\zeta ^{*}(\theta -1)}+ \Vert u \Vert ^{\theta -1}_{\zeta ^{*}(\theta -1)} \bigr) \\& \quad \leq \varepsilon \Vert u_{n}-u \Vert _{\infty } \bigl( \Vert u_{n} \Vert ^{\theta -1} _{\zeta ^{*}(\theta -1)}+ \Vert u \Vert ^{\theta -1}_{\zeta ^{*}(\theta -1)} \bigr). \end{aligned}$$ Hence, by (17)–(20) we conclude that \(\|u_{n}-u\|\to 0\) as \(n\to +\infty \), which means that the \((C)\) condition is fulfilled. □ Suppose that the conditions of Theorem 1 hold; then there exist \(\varrho _{1}, \alpha _{1}>0\) such that \(I|_{\partial B_{\varrho _{1}}} \geq \alpha _{1}\), where \(B_{\varrho _{1}}=\{u\in E: \|u\|\leq \varrho _{1}\}\). 
In view of (7) and (8), for any \(u\in E\) and \(0<\varepsilon <(pC_{p}^{p})^{-1}\), we have $$\begin{aligned} I(u) =& \frac{1}{p} \Vert u \Vert ^{p}- \int _{\mathbb{R}}W_{1}(t,u)\,dt- \int _{\mathbb{R}}W_{2}(t,u)\,dt \\ \geq &\frac{1}{p} \Vert u \Vert ^{p}-\varepsilon \int _{\mathbb{R}} \vert u \vert ^{p}\,dt-C _{\varepsilon } \int _{\mathbb{R}} \vert u \vert ^{\mu }\,dt- \frac{C^{\theta }_{ \theta \zeta ^{*}}}{\theta } \Vert c \Vert _{\zeta } \Vert u \Vert ^{\theta } \\ \geq &\frac{1}{p} \Vert u \Vert ^{p}-\varepsilon C^{p}_{p} \Vert u \Vert ^{p} -C_{\varepsilon }C^{\mu }_{\mu } \Vert u \Vert ^{\mu }-\frac{C^{\theta }_{\theta \zeta ^{*}}}{ \theta } \Vert c \Vert _{\zeta } \Vert u \Vert ^{\theta } \\ \geq & \biggl(\frac{1}{p}-\varepsilon C^{p}_{p} \biggr) \Vert u \Vert ^{p}-C _{\varepsilon }C^{\mu }_{\mu } \Vert u \Vert ^{\mu }- \frac{C^{\theta }_{\theta \zeta ^{*}}}{\theta } \Vert c \Vert _{\zeta } \Vert u \Vert ^{\theta }, \end{aligned}$$ which combined with \((W_{6})\) implies that there exist positive constants \(\varrho _{1}\) and \(\alpha _{1}\) such that \(I|_{\partial B _{\varrho _{1}}}\geq \alpha _{1}\). □ Assume that the conditions of Theorem 1 hold; then there exists \(v_{1}\in E\) such that \(\|v_{1}\|>\varrho _{1}\) and \(I(v_{1})\leq 0\), where \(\varrho _{1}\) is defined in Lemma 7. We choose \(v_{0}\in C_{0}^{\infty }([-1,1], \mathbb{R}^{N})\) such that \(\|v_{0}\|=1\). For \(\beta >(p\int ^{1}_{-1}|v_{0}(t)|^{p}\,dt)^{-1}\), it follows from \((W_{2})\) that there exists \(\tau >0\) such that $$\begin{aligned} W(t,x)\geq \beta \vert x \vert ^{{p}} \end{aligned}$$ for all \(|x|\geq \tau \). By \((W_{3})\), we get $$\begin{aligned} W(t,x)\geq \beta \bigl( \vert x \vert ^{p}-\tau ^{p} \bigr)-d_{1}\tau ^{p} \end{aligned}$$ for all \((t,x)\in \mathbb{R}\times \mathbb{R}^{N}\). 
For \(\eta >0\), by (21) and \((W_{5})\) we have $$\begin{aligned} I(\eta v_{0}) =&\frac{\eta ^{p}}{p}- \int ^{1}_{-1}W_{1} \bigl(t, \eta v_{0}(t) \bigr)\,dt- \int ^{1}_{-1}W_{2} \bigl(t, \eta v_{0}(t) \bigr)\,dt \\ \leq &\frac{\eta ^{p}}{p}- \int ^{1}_{-1}W_{1} \bigl(t, \eta v_{0}(t) \bigr)\,dt \\ \leq &\frac{\eta ^{p}}{p}- \int ^{1}_{-1}\beta \bigl\vert \eta v_{0}(t) \bigr\vert ^{p}\,dt+ \beta \int ^{1}_{-1}\tau ^{p} \,dt+d_{1} \int ^{1}_{-1}\tau ^{p}\,dt \\ \leq & \biggl(\frac{1}{p}-\beta \int ^{1}_{-1} \bigl\vert v_{0}(t) \bigr\vert ^{p}\,dt \biggr) \eta ^{p}+2\beta \tau ^{p}+2d_{1}\tau ^{p}, \end{aligned}$$ which implies that $$\begin{aligned} I(\eta v_{0})\to -\infty \quad \text{as } \eta \to +\infty . \end{aligned}$$ Therefore, there exists \(\eta _{0}>0\) such that \(I(\eta _{0} v_{0})<0\). Letting \(v_{1}=\eta _{0} v_{0}\), we see that \(I(v_{1})<0\), which proves this lemma. □ By Lemmas 4–8, we can see that I possesses at least one nontrivial critical point. Then the critical point is the first homoclinic solution to (1). To get the second solution, we just need to prove that \(\inf_{u\in B_{\varrho _{1}}} I(u)<0\), where \(B_{\varrho _{1}}\) is defined in Lemma 7. We choose \(v_{2}\in C^{\infty }_{0}([-1,1], \mathbb{R}^{N})\setminus \{0\}\). 
Then, by \((W_{3})\) and \((W_{5})\), for any \(l>0\) we get $$\begin{aligned} I(lv_{2}) =& \frac{l^{p}}{p} \Vert v_{2} \Vert ^{p}- \int ^{1}_{-1} W_{1} \bigl(t,lv _{2}(t) \bigr)\,dt- \int ^{1}_{-1} W_{2} \bigl(t,lv_{2}(t) \bigr)\,dt \\ \leq &\frac{l^{p}}{p} \Vert v_{2} \Vert ^{p}+d_{1}l^{p} \int ^{1}_{-1} \bigl\vert v_{2}(t) \bigr\vert ^{p}\,dt-l ^{\theta } \int ^{1}_{-1}b(t) \bigl\vert v_{2}(t) \bigr\vert ^{\theta }\,dt \\ \leq &\frac{l^{p}}{p} \Vert v_{2} \Vert ^{p}+d_{1}l^{p} \int ^{1}_{-1} \bigl\vert v_{2}(t) \bigr\vert ^{p}\,dt-l ^{\theta } \Bigl(\min _{t\in [-1,1]}b(t) \Bigr) \int ^{1}_{-1} \bigl\vert v_{2}(t) \bigr\vert ^{ \theta }\,dt \\ < &0 \end{aligned}$$ for l small enough, which implies that \(\delta _{1}= \inf_{u\in B_{\varrho _{1}}} I(u)<0\). Then it follows from Ekeland's variational principle that there exists a minimizing sequence \(\{v_{n}\}\subset B_{\varrho _{1}}\) such that $$\begin{aligned} \delta _{1}\leq I(v_{n})< \delta _{1}+ \frac{1}{n} \quad \text{and} \quad I(u)\geq I(v_{n})- \frac{1}{n} \Vert u-v_{n} \Vert \quad \text{for } u \in B_{\varrho _{1}}. \end{aligned}$$ Thus, \(\{v_{n}\}\) is a bounded \((PS)\) sequence, which means that it is also a \((C)\) sequence. Then from Lemma 6, there exists \(u_{1}\in E\) such that \(I'(u_{1})=0\) and \(I(u_{1})<0\). In conclusion, problem (1) possesses at least two nontrivial homoclinic solutions. □ In this section, we still work in the Banach space E. Suppose that condition \((A_{2})\) holds; then the embedding \(E\hookrightarrow L^{1}(\mathbb{R}, \mathbb{R}^{N})\) is continuous and compact. Assume that \(\{u_{n}\}\subset E\) is such that \(u_{n}\rightharpoonup 0\) in E. We will show that \(u_{n}\to 0\) in \(L^{1}(\mathbb{R}, \mathbb{R} ^{N})\). 
By the Banach–Steinhaus theorem, there exists \(M_{5}>0\) such that \(\|u_{n}\|\leq M_{5}\) for all \(n\). For any \(\varepsilon >0\), by condition (\(A_{2}\)) there is \(r_{0}>0\) such that $$\begin{aligned} \operatorname{meas} B_{\varepsilon }< \varepsilon , \end{aligned}$$ where $$\begin{aligned} B_{\varepsilon }= \bigl\{ t\in \mathbb{R}\setminus (-r_{0}, r_{0}) |\ \vert t \vert ^{-\lambda p} a(t)< \varepsilon ^{-1} \bigr\} . \end{aligned}$$ Set $$\begin{aligned} D_{\varepsilon } &=\mathbb{R}\setminus \bigl((-r_{0}, r_{0})\cup B _{\varepsilon } \bigr), \\ \mu _{\varepsilon } &=\inf_{ t\in D_{\varepsilon }} \vert t \vert ^{-\lambda p}a(t), \end{aligned}$$ then \(\frac{1}{\mu _{\varepsilon }}\leq \varepsilon \). On the one hand, one has $$\begin{aligned} \int _{ \vert t \vert \geq r_{0}} \vert u_{n} \vert \,dt =& \int _{{B_{\varepsilon }}} \vert u_{n} \vert \,dt+ \int _{{D_{\varepsilon }}} \vert u_{n} \vert \,dt \\ \leq & \Vert u_{n} \Vert _{\infty }\cdot \operatorname{meas} {B_{\varepsilon }}+ \int _{D_{\varepsilon }} \vert t \vert ^{\lambda } \vert u_{n} \vert \vert t \vert ^{-\lambda } \,dt \\ \leq &\varepsilon C_{\infty }M_{5}+ \biggl( \int _{D_{\varepsilon }} \vert t \vert ^{ \lambda p} \vert u_{n} \vert ^{p}\,dt \biggr)^{\frac{1}{p}} \biggl( \int _{ \vert t \vert \geq r _{0}} \vert t \vert ^{-\lambda q}\,dt \biggr)^{\frac{1}{q}} \\ \leq & \varepsilon C_{\infty }M_{5}+\delta _{2} \mu _{\varepsilon }^{- \frac{1}{p}} \biggl( \int _{D_{\varepsilon }}a(t) \vert u_{n} \vert ^{p}\,dt \biggr) ^{\frac{1}{p}} \\ \leq &\varepsilon C_{\infty }M_{5}+\varepsilon ^{\frac{1}{p}}\delta _{2}M_{5}, \end{aligned}$$ where \(\delta _{2}= (\int _{|t|\geq r_{0}}|t|^{-\lambda q}\,dt ) ^{\frac{1}{q}}\). On the other hand, it follows from the Sobolev compact embedding theorem that \(u_{n}\to 0\) in \(L^{1}((-r_{0},r_{0}), \mathbb{R}^{N})\). Therefore, the embedding \(E\hookrightarrow L^{1}( \mathbb{R},\mathbb{R}^{N})\) is compact. 
Now for \(\varepsilon =1\), by (22) we have $$\begin{aligned} \int _{ \vert t \vert \geq r_{0}} \vert u \vert \,dt\leq C_{\infty } \Vert u \Vert +\delta _{2} \Vert u \Vert = (C _{\infty }+\delta _{2}) \Vert u \Vert ,\quad \forall u\in E, \end{aligned}$$ which implies that the embedding is also continuous. □ By similar steps to the proof of Theorem 1, we can obtain the conclusion of Theorem 2. □ Chen, G.W.: Superquadratic or asymptotically quadratic Hamiltonian systems: ground state homoclinic orbits. Ann. Mat. Pura Appl. 194, 903–918 (2015) Coti-Zelati, V., Rabinowitz, P.H.: Homoclinic orbits for second order Hamiltonian systems possessing superquadratic potentials. J. Am. Math. Soc. 4, 693–727 (1991) Izydorek, M., Janczewska, J.: Homoclinic solutions for nonautonomous second order Hamiltonian systems with a coercive potential. J. Math. Anal. Appl. 335, 1119–1127 (2007) Li, X.F., Jia, J.: New homoclinic solutions for a class of second-order Hamiltonian systems with a mixed condition. Bound. Value Probl. 2018, 133 (2018) Lin, X.Y., Tang, X.H.: Infinitely many homoclinic orbits of second-order p-Laplacian systems. Taiwan. J. Math. 17(4), 1371–1393 (2013) Lu, S.P.: Homoclinic solutions for a nonlinear second order differential system with p-Laplacian operator. Nonlinear Anal., Real World Appl. 12(1), 525–534 (2011) Lv, X., Lu, S.: Homoclinic solutions for ordinary p-Laplacian systems. Appl. Math. Comput. 218, 5682–5692 (2012) Rabinowitz, P.H.: Minimax Methods in Critical Point Theory with Applications to Differential Equations. CBMS, Regional Conf. Ser. in Math., vol. 65. Amer. Math. Soc., Providence (1986) Rabinowitz, P.H., Tanaka, K.: Some results on connecting orbits for a class of Hamiltonian systems. Math. Z. 206, 473–499 (1990) Shi, X.B., Zhang, Q.F., Zhang, Q.M.: Existence of homoclinic orbits for a class of p-Laplacian systems in a weighted Sobolev space. Bound. Value Probl. 
2013, 137 (2013) Sun, J., Wu, T.: Multiplicity and concentration of homoclinic solutions for some second order Hamiltonian system. Nonlinear Anal. 114, 105–115 (2015) Tang, X.H., Xiao, L.: Homoclinic solutions for ordinary p-Laplacian systems with a coercive potential. Nonlinear Anal. 71, 1124–1132 (2009) Tersian, S.: On symmetric positive homoclinic solutions of semilinear p-Laplacian differential equations. Bound. Value Probl. 2012, 121 (2012) Wan, L.L., Tang, C.L.: Existence and multiplicity of homoclinic orbits for second order Hamiltonian systems without (AR) condition. Discrete Contin. Dyn. Syst. 15, 255–271 (2011) Wu, D.L., Tang, C.L., Wu, X.P.: Homoclinic orbits for a class of second-order Hamiltonian systems with concave–convex nonlinearities. Electron. J. Qual. Theory Differ. Equ. 2018, 6 (2018) Yang, J., Zhang, F.B.: Infinitely many homoclinic orbits for the second-order Hamiltonian systems with super-quadratic potentials. Nonlinear Anal. 10, 1417–1423 (2009) Yang, M.H., Han, Z.Q.: Infinitely many homoclinic solutions for second-order Hamiltonian systems with odd nonlinearities. Nonlinear Anal. 74, 2635–2646 (2011) Yu, X., Lu, S.: A multiplicity result for periodic solutions of Liénard equations with an attractive singularity. Appl. Math. Comput. 346, 183–192 (2019) Zhang, Q.F., Tang, X.H.: Existence of homoclinic orbits for a class of asymptotically p-linear aperiodic p-Laplacian systems. Appl. Math. Comput. 218(13), 7164–7173 (2012) Zhang, X.Y.: Homoclinic orbits for a class of p-Laplacian systems with periodic assumption. Electron. J. Qual. Theory Differ. Equ. 2013(67), 1 (2013) Zhang, Z.H., Yuan, R.: Homoclinic orbits for \(p(t)\)-Laplacian Hamiltonian systems without coercive conditions. Mediterr. J. Math. 13(4), 1589–1611 (2016) Zhang, Z.H., Yuan, R.: Homoclinic solutions for p-Laplacian Hamiltonian systems with combined nonlinearities. Qual. Theory Dyn. Syst. 
2017(16), 761–774 (2017) Zou, W.M., Li, S.J.: Infinitely many homoclinic orbits for the second-order Hamiltonian systems. Appl. Math. Lett. 16, 1283–1287 (2003) The author would like to thank the referees for their pertinent comments and valuable suggestions. School of Science, Southwest University of Science and Technology, Mianyang, China Lili Wan The author read and approved the final manuscript. Correspondence to Lili Wan. The author declares that there is no conflict of interests regarding the publication of this paper. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Wan, L. Multiple homoclinic solutions for p-Laplacian Hamiltonian systems with concave–convex nonlinearities. Bound Value Probl 2020, 4 (2020) DOI: https://doi.org/10.1186/s13661-019-01317-z Homoclinic solutions p-Laplacian operator Variational methods
Astrophysics > Astrophysics of Galaxies Title: Optical spectral characterization of the TeV extreme blazar 2WHSP J073326.7+515354 Authors: J. Becerra González, J. A. Acosta-Pulido, R. Clavero (Submitted on 23 Apr 2020) Abstract: The emission from the relativistic jets in blazars usually outshines their host galaxies, challenging the determination of their distances and the characterization of the stellar population. The situation becomes more favorable in the case of the extreme blazars (EHBLs), for which the bulk of the emission of the relativistic jets is emitted at higher energies, unveiling the optical emission from the host galaxy. The distance determination is fundamental for the study of the intrinsic characteristics of the blazars, especially to estimate the intrinsic gamma-ray spectra distorted due to the interaction with the Extragalactic Background Light. In this work we report on the properties of 2WHSP~J073326.7+515354 host galaxy in the optical band, which is one of the few EHBLs detected at TeV energies. We present the first measurement of the distance of the source, $\mathrm{z}=0.06504\pm0.00002$ (velocity dispersion $\sigma=237 \pm 9\,\mathrm{km s^{-1}}$). We also perform a detailed study of the stellar population of its host galaxy. We find that the mass-weighted mean stellar age is $11.72\pm0.06\,\mathrm{Gyr}$ and the mean metallicity $[M/H]=0.159 \pm 0.016$. In addition, a morphological study of the host galaxy is also carried out. The surface brightness distribution is modelled by a composition of a dominant classical bulge ($R_e=3.77\pm1\arcsec $ or equivalently 4.74~kpc) plus an unresolved source which corresponds to the active nucleus. 
The black hole mass is estimated using both the mass relation with the velocity dispersion and the absolute magnitude from the bulge yielding comparable results: $(4.8\pm0.9)\times10^8\,M_{\odot}$ and $(3.7\pm1.0)\times10^8\,M_{\odot}$, respectively. Comments: Accepted for publication in MNRAS Subjects: Astrophysics of Galaxies (astro-ph.GA); High Energy Astrophysical Phenomena (astro-ph.HE); High Energy Physics - Experiment (hep-ex) DOI: 10.1093/mnras/staa1144 Cite as: arXiv:2004.11359 [astro-ph.GA] (or arXiv:2004.11359v1 [astro-ph.GA] for this version) From: Josefa Becerra Gonzalez [v1] Thu, 23 Apr 2020 17:56:10 GMT (1153kb,D)
Fiona is people-watching again. She spies a group of ten high schoolers and starts playing a game by herself, in which she looks at a pair of people from the group of ten and tries to guess whether they like or dislike each other. How many pairs of friends can she observe before she runs out of pairs to evaluate? There are $10$ options for the first person and $9$ options left for the second person for a preliminary count of $10 \cdot 9 = 90$ pairs. However, the order in which Fiona chooses the people doesn't matter, and we've counted each pair twice, which means our final answer is $\dfrac{10\cdot9}{2}=\boxed{45}$ pairs of friends.
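The counting argument above can be checked directly (a quick sketch, not part of the original solution):

```python
# Direct check of the pair-counting argument: 10 * 9 ordered choices of
# two distinct people, with each unordered pair counted exactly twice.
from math import comb

ordered = 10 * 9      # ordered choices of two distinct people
pairs = ordered // 2  # divide by 2 to undo the double counting
assert pairs == comb(10, 2) == 45
print(pairs)  # 45
```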
\begin{document} \title{Hamiltonian Simulation by Uniform Spectral Amplification} \author{ \normalsize Guang Hao Low\thanks{Department of Physics, Massachusetts Institute of Technology \texttt{\{[email protected]\}}},\quad \normalsize Isaac L. Chuang\thanks{Department of Electrical Engineering and Computer Science, Department of Physics, Research Laboratory of Electronics, Massachusetts Institute of Technology \texttt{\{[email protected]\}}} } \date{\today} \maketitle \begin{abstract} The exponential speedups promised by Hamiltonian simulation on a quantum computer depend crucially on structure in both the Hamiltonian $\hat{H}$, and the quantum circuit $\hat{U}$ that encodes its description. In the quest to better approximate time-evolution $e^{-i\hat{H}t}$ with error $\epsilon$, we motivate a systematic approach to understanding and exploiting structure, in a setting where Hamiltonians are encoded as measurement operators of unitary circuits $\hat{U}$ for generalized measurement. This allows us to define a \emph{uniform spectral amplification} problem on this framework for expanding the spectrum of encoded Hamiltonian with exponentially small distortion. We present general solutions to uniform spectral amplification in a hierarchy where factoring $\hat{U}$ into $n=1,2,3$ unitary oracles represents increasing structural knowledge of the encoding. Combined with structural knowledge of the Hamiltonian, specializing these results allows us to simulate time-evolution by $d$-sparse Hamiltonians using $\mathcal{O}\left(t(d \|\hat H\|_{\text{max}}\|\hat H\|_{1})^{1/2}\log{(t\|\hat{H}\|/\epsilon)}\right)$ queries, where $\|\hat H\|\le \|\hat H\|_1\le d\|\hat H\|_{\text{max}}$. Up to logarithmic factors, this is a polynomial improvement upon prior art using $\mathcal{O}\left(td\|\hat H\|_{\text{max}}+\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\right)$ or $\mathcal{O}(t^{3/2}(d \|\hat H\|_{\text{max}}\|\hat H\|_{1}\|\hat H\|/\epsilon)^{1/2})$ queries. 
In the process, we also prove a matching lower bound of $\Omega(t(d\|\hat H\|_{\text{max}}\|\hat H\|_{1})^{1/2})$ queries, present a distortion-free generalization of spectral gap amplification, and an amplitude amplification algorithm that performs multiplication on unknown state amplitudes. \end{abstract} \tableofcontents \section{Introduction} \label{Sec:Introduction} Quantum algorithms for matrix operations are among the most exciting applications of quantum computers. In the best cases, they promise exponential speedups over classical approaches for problems such as matrix inversion~\cite{Harrow2009} and Hamiltonian simulation, which is matrix exponentiation. Intuitively, any arbitrary unitary matrix applied to a $q$-qubit quantum state is `exponentially fast' due to a state space of dimension $n=2^q$. However, if these matrix elements are presented as a classical list of $\mathcal{O}(n^2)$ numbers, simply encoding the data into a quantum circuit already takes exponential time. Thus the extent of this speedup is sensitive to both the properties of the Hamiltonian and the input model defining how that information is made accessible to a quantum computer. Broad classes of Hamiltonians $\hat{H}$, structured so as to enable this exponential speedup, are well-known. The most-studied examples include local Hamiltonians~\cite{Lloyd1996universal} built from a sum of terms each acting on a constant number of qubits, and its generalization as $d$-sparse matrices~\cite{Aharonov2003Adiabatic} with at most $d$ non-zero entries in every row, whose values and positions must all be efficiently computable. More recent innovations consider matrices that are a linear combination of unitaries~\cite{Childs2012,Berry2015Truncated,Novo2016improved} or density matrices~\cite{LloydMohseniRebentrost2014,Kimmel2017}. 
Though different classes define different input models, that is, unitary quantum oracles that encode $\hat{H}$, it is still helpful to quantify the cost of various quantum matrix algorithms through the query complexity, which in turn depends on various structural descriptors of $\hat{H}$, such as, but not limited to, its spectral norm $\|\hat{H}\|$, induced $1$-norm $\|\hat{H}\|_{1}$, max-norm $\|\hat{H}\|_{\text{max}}$, rank, or sparsity. A challenging open problem is how knowledge of any structure may be maximally exploited to accelerate quantum algorithms. As the time-evolution operator $e^{-i\hat{H}t}$ underlies numerous such quantum algorithms, one common benchmark is the Hamiltonian simulation problem of converting this description of $\hat{H}$ into a quantum circuit that approximates $e^{-i\hat{H}t}$ for time $t$ with some error $\epsilon$. To illustrate, we recently provided an algorithm with optimal query complexity $\mathcal{O}\big(td\|\hat{H}\|_{\text{max}}+\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\big)$~\cite{Low2016HamSim} in all parameters for sparse matrices~\cite{Childs2010,Berry2014,Berry2015Hamiltonian}, based on \emph{quantum signal processing} techniques~\cite{Low2016methodology}. Though this settles the worst-case situation where only $d$ and the max-norm $ \|\hat{H}\|_{\text{max}}$ are known in advance, there exist algorithms that exploit additional knowledge of the spectral norm $\|\hat{H}\|$ and induced-one norm $\|\hat{H}\|_{1}$ to achieve simulation with $\mathcal{O}(t^{3/2}(d\|\hat{H}\|_{\text{max}}\|\hat{H}\|_{1}\|\hat{H}\|\frac{1}{\epsilon})^{1/2})$~\cite{Berry2012} queries. Though this square-root scaling in sparsity alone is optimal, it is currently unknown whether the significant penalty paid in the time and error scaling is unavoidable. 
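As a quick numerical illustration of the norm hierarchy that these query complexities depend on (our own toy check, not from the paper: the matrix, helper names, and power-iteration routine are all ours), the chain $\|\hat H\|_{\text{max}}\le \|\hat H\|\le \|\hat H\|_{1}\le d\|\hat H\|_{\text{max}}$ can be verified on a small sparse symmetric matrix:

```python
# Sanity check of ||H||_max <= ||H|| <= ||H||_1 <= d * ||H||_max on a toy
# 3-sparse symmetric matrix with eigenvalues {1, 2, 4} (illustrative only).

H = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
n = len(H)

max_norm = max(abs(x) for row in H for x in row)                        # ||H||_max = 3
one_norm = max(sum(abs(H[i][j]) for i in range(n)) for j in range(n))  # ||H||_1 = 5 (max column sum)
d = max(sum(1 for x in row if x != 0.0) for row in H)                  # sparsity d = 3

def spectral_norm(M, iters=500):
    """|lambda|_max of a symmetric matrix via plain power iteration."""
    v = [1.0] * len(M)
    norm = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(M))) for i in range(len(M))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return norm

spec = spectral_norm(H)                               # ~4.0
assert max_norm <= spec <= one_norm <= d * max_norm   # 3 <= 4 <= 5 <= 9
# The geometric mean appearing in the improved query count sits below the
# worst-case d * ||H||_max factor:
assert (d * max_norm * one_norm) ** 0.5 <= d * max_norm  # sqrt(45) <= 9
```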
Motivated by the inequalities $\|\hat{H}\|_{\text{max}}\le \|\hat{H}\|\le \|\hat{H}\|_{1}\le d \|\hat{H}\|_{\text{max}}$~\cite{Childs2010Limitation}, one could hope for a best-case algorithm in Claim~\ref{Claim:Sparse_Ham_Sim} that interpolates between these possibilities. \begin{claim}[Sparse Hamiltonian simulation] \label{Claim:Sparse_Ham_Sim} Given the standard quantum oracles that return values of $d$-sparse matrix elements of the Hamiltonian $\hat{H}$, there exists a quantum circuit that approximates time-evolution $e^{-i\hat{H}t}$ with error $\epsilon$ using $Q=\mathcal{O}\big(t(d\|\hat{H}\|_{\text{max}}\|\hat{H}\|_1)^{1/2}+\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\big)$ queries and $\mathcal{O}(Q\log{(n)})$ single and two-qubit quantum gates. \end{claim} The challenge is exacerbated by how unitary time-evolution, though a natural consequence of Schr{\"o}dinger's equation in \emph{continuous}-time, is not natural to the gate model of \emph{discrete}-time quantum computation. In some cases, such as quantum matrix inversion~\cite{Kothari2014efficient}, algorithms that are more efficient as well as considerably simpler in both execution and concept can be obtained by creatively bypassing Hamiltonian simulation as an intermediate step. The need to disentangle the problem of exploiting structure from that of finding best simulation algorithms is highlighted by celebrated Hamiltonian simulation techniques such Lie-Product formulas~\cite{Lloyd1996universal}, quantum walks~\cite{Childs2010}, and truncated-Taylor series~\cite{Berry2015Truncated}, each radically different and specialized to some class of structured matrices. A unifying approach to exploiting the structure of Hamiltonians, independent of any specific quantum algorithm, is hinted at by recent results on Hamiltonian simulation by \emph{qubitization}~\cite{Low2016hamiltonian}. 
There, we focus on a \emph{standard-form} encoding of matrices (Def.~\ref{Def:Standard_Form}), which, in addition to generalizing a number of prior input models, also appears more natural. On measurement outcome $\ket{0}_a$ with best-case success probability $(\|\hat{H}\|/\alpha)^2\le 1$, a Hermitian measurement operator $\hat{H}/\alpha$ is applied to the system -- thus the standard-form is no more or less than the fundamental steps of generalized measurement~\cite{Nielsen2004}. Treating this quantum circuit as a unitary oracle amounts to possessing no structural information whatsoever about $\hat{H}$. In this situation, we provided an optimal simulation algorithm (Thm.~\ref{Thm:Ham_Sim_Qubitization}), notably with only $\mathcal{O}(1)$ ancilla overhead. \begin{restatable}[Standard-form matrix encoding]{definition}{StandardForm} \label{Def:Standard_Form} A matrix $\hat{H} \in \mathbb {C}^{n\times n}$ acting on the system register $s$ is encoded in standard-form-$(\hat{H},\alpha,\hat{U},d)$ with normalization $\alpha \ge \|\hat{H}\|$ by the computational basis state $\ket{0}_a \in \mathbb {C}^d$ on the ancilla register $a$ and signal unitary $\hat{U} \in \mathbb {C}^{d n \times dn}$ if $(\bra{0}_a\otimes \hat{I}_s)\hat{U}(\ket{0}_a\otimes \hat{I}_s)=\hat{H}/\alpha$.\footnote{The unitary $\hat{G}$ defined in~\cite{Low2016hamiltonian} such that $((\bra{0}\hat{G}^\dag)\otimes \hat{I})\hat{U}((\hat{G}\ket{0})\otimes \hat{I})=\hat{H}/\alpha$, which encodes $\hat{H}$ with normalization $\alpha$, may be absorbed into a redefinition of $\hat{U}$. Moreover, for any $\beta > 0$, this is identical to encoding $\hat{H}\beta$ with normalization $\alpha\beta$. } If $\hat{H}$ is also Hermitian, this is called a Hermitian standard-form encoding. 
\end{restatable} \begin{theorem}[Hamiltonian simulation by qubitization, Thm.~1 of~\cite{Low2016hamiltonian}] \label{Thm:Ham_Sim_Qubitization} Given Hermitian standard-form-$(\hat{H},\alpha,\hat{U},d)$, there exists a standard-form-$(\hat{X},1,\hat{V},4d)$ such that $\|\hat{X}-e^{-i\hat{H}t}\|\le\epsilon$, where $\hat{V}$ requires $Q=\mathcal{O}\big(t\alpha +\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\big)$ queries to controlled-$\hat{U}$ and $\mathcal{O}(Q\log{(d)})$ primitive gates\footnote{As error $\epsilon$ occurs only in logarithms, it may refer to the trace distance, failure probability, or any other polynomially-related distance without affecting the complexity scaling.}. \end{theorem} This motivates the standard-form encoding as the appropriate endpoint when structural information about $\hat{H}$ is provided, though it does not exclude the possibility of superior simulation algorithms not based on the standard-form. As Thm.~\ref{Thm:Ham_Sim_Qubitization} is the optimal simulation algorithm, any exploitation of structure should manifest in minimizing the normalization $\alpha$ of a Hamiltonian encoded in Def.~\ref{Def:Standard_Form}. In order to avoid accumulating polynomial factors of error, this must be done with only an exponentially small distortion to its spectrum. Moreover, the cost of the procedure should allow for a favorable trade-off in the query complexity of Hamiltonian simulation. Thus the manipulation of the standard-form and any additional structural information to this end is what we call the \emph{uniform spectral amplification} problem. 
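To make the standard-form encoding concrete, the following numerical sketch (our own toy construction with a single ancilla qubit, i.e.\ $d=2$, not a circuit from this work) completes a subnormalized Hermitian block $\hat{H}/\alpha$ into a unitary and checks the defining identity $(\bra{0}_a\otimes \hat{I}_s)\hat{U}(\ket{0}_a\otimes \hat{I}_s)=\hat{H}/\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hermitian H and a normalization alpha >= ||H||.
n = 4
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (H + H.conj().T) / 2
alpha = 1.5 * np.linalg.norm(H, 2)   # any alpha >= ||H|| is allowed
A = H / alpha                        # the block to encode, ||A|| <= 1

# sqrt(I - A^2) via the eigendecomposition of the Hermitian matrix A.
w, V = np.linalg.eigh(A)
S = (V * np.sqrt(1 - w**2)) @ V.conj().T

# A textbook unitary completion placing A in the |0><0|_a ancilla block:
#     U = [[ A,  S ],
#          [ S, -A ]]
U = np.block([[A, S], [S, -A]])

assert np.allclose(U.conj().T @ U, np.eye(2 * n))   # U is unitary
assert np.allclose(U[:n, :n], H / alpha)            # (<0|_a ox I) U (|0>_a ox I) = H/alpha
```

Unitarity holds because $A$ and $S=\sqrt{I-A^2}$ commute and $A^2+S^2=I$.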
\begin{problem}[Uniform spectral amplification] \label{Problem:Uniform_Spectral_Amplification} Given Hermitian standard-form-$(\hat{H},\alpha,\hat{U},d)$, and an upper bound $\Lambda \in[ \|\hat{H}\|,\alpha]$ on the spectral norm, exploit any additional information about $\hat{H}$ or the signal unitary $\hat{U}$ to construct a $Q$-query quantum circuit that encodes $\hat{H}_{\text{amp}}$ in standard-form with normalization $\Lambda$, such that $\|\hat{H}_{\text{amp}}-\hat{H}\|\le \epsilon$, and $Q=o(\alpha/\Lambda)\cdot \mathcal{O}(\text{polylog}(1/\epsilon))$. \end{problem} Uniform spectral amplification is non-trivial as it precludes a number of standard techniques. First, amplitude amplification is precluded as the success probability must be boosted for \emph{all} input states to the system. Second, oblivious amplitude amplification~\cite{Berry2014,Berry2015Truncated} is also precluded as $\hat{H}$ is not in general unitary, or even close to unitary. Third, spectral gap amplification~\cite{Somma2013SpectralGap} is precluded as it distorts the spectrum. As such, solving this problem would be of broad interest beyond Hamiltonian simulation. For instance, spectral gap amplification is fundamental to adiabatic state preparation and to understanding properties of condensed matter systems. Moreover, the prevalence of generalized measurements means that this could also be applicable to quantum observable estimation in metrology and repeat-until-success gate synthesis~\cite{Paetznick2014}. Some forms of spectral gap amplification have an underlying structure that resembles the amplitude amplification algorithm for quantum state preparation. This suggests that at least one possible solution to uniform spectral amplification could be obtained by solving a related non-trivial \emph{amplitude multiplication} problem, and vice-versa. 
\begin{problem}[Amplitude multiplication] \label{Problem:Uniform_Amplitude_Amplification} Given a quantum state preparation oracle $\hat{G}\ket{0}_a\ket{0}_b=\lambda \ket{t}_a\ket{0}_b+\sqrt{1-\lambda^2} \ket{t^\perp}_{ab}$, and an upper bound $\Gamma \in [\lambda ,1]$ on the target state overlap, construct a $Q$-query quantum circuit $\hat{V}$ that prepares $\hat{V}\ket{0}_a\ket{0}_b=\lambda_{\text{amp}} \ket{t}_a\ket{0}_b+\cdots \ket{t^\perp}_{ab}$ such that $|\lambda_{\text{amp}}-\lambda/\Gamma|\le \epsilon$, and $Q=\mathcal{O}(\Gamma^{-1}\log{(1/\epsilon)})$. \end{problem} Amplitude multiplication is particularly interesting as amplitude amplification and its many other variations~\cite{Yoder2014} amplify target states with the same optimal scaling $\mathcal{O}(\lambda^{-1})$, but with a highly non-linear dependence on the initial overlap. In contrast, Problem~\ref{Problem:Uniform_Amplitude_Amplification} performs arithmetic multiplication on the amplitudes with exponentially small error, notably \emph{independent of, and without any prior knowledge of, their values}. \subsection{Our Results} We present quantum algorithms for Hamiltonian simulation based on the general principle of finding solutions to the uniform spectral amplification Problem~\ref{Problem:Uniform_Spectral_Amplification}, which may be broadly categorized as follows. In `uniform spectral amplification by quantum signal processing', we make no assumptions on the form of the signal unitary in the standard-form encoding of $\hat{H}$, and thus treat it as a single unitary oracle. In `uniform spectral amplification by amplitude multiplication', we assume that the signal unitary factors into two or three unitary oracles, and by solving amplitude multiplication in Problem~\ref{Problem:Uniform_Amplitude_Amplification}, also approach the sparse simulation results of Claim~\ref{Claim:Sparse_Ham_Sim}. 
We then provide a unifying perspective in `universality of the standard-form' which further motivates the standard-form encoding of Hamiltonians as a fundamental ingredient in quantum computation. In greater detail, these results are as follows. \subsubsection{Uniform Spectral Amplification by Quantum Signal Processing} \label{Sec:Branch_QSP} If we make no assumptions on the form of the signal unitary $\hat{U}$ that realizes the standard-form encoding, we treat $\hat{U}$ as a black-box oracle, which we call the standard-form oracle. In this situation, the first result is uniform spectral amplification in Thm.~\ref{Cor:Operator_Amplification}, which reduces the normalization $\alpha$ of encoded Hamiltonians to $\mathcal{O}(\Lambda)$ using $\mathcal{O}(\alpha\Lambda^{-1}\log(1/\epsilon))$ queries. This produces a quadratic improvement in success probability when the standard-form is applied to perform quantum measurement, but offers no advantage for Hamiltonian simulation. \begin{theorem}[Uniform spectral amplification by spectral multiplication] \label{Cor:Operator_Amplification} Given Hermitian standard-form-$(\hat{H},\alpha,\hat{U},d)$, let $\Lambda\in[\|\hat{H}\|,\alpha]$. Then for any $\epsilon\le \mathcal{O}(\Lambda/\alpha)$, there exists a standard-form-$(\hat{H}_{\text{amp}},2\Lambda,\hat{V},4d)$ such that $\frac{1}{2\Lambda}\|\hat{H}_{\text{amp}}-\hat{H}\| \le \epsilon$, and $\hat{V}$ requires $\mathcal{O}(\alpha\Lambda^{-1}\log{(1/\epsilon)})$ queries to controlled-$\hat{U}$. \end{theorem} The second result, Thm.~\ref{Thm:Ham_Encoding_Uniform_Amplification}, is uniform spectral amplification of only the low-energy subspace of $\hat{H}$, with eigenvalues $\in[-\alpha,-\alpha(1-\Delta)]$, which is of interest to quantum chemistry and adiabatic computation. There, the effective normalization is reduced to $\mathcal{O}(1)$ using $\mathcal{O}(\Delta^{-1/2}\log^{3/2}{(\frac{1}{\Delta\epsilon})})$ queries. 
This is a generalization of spectral gap amplification~\cite{Somma2013SpectralGap} with the distinction of preserving the relative energy spacing of all relevant states, and of applying to any Hamiltonian encoded in standard-form. When applied to Hamiltonian simulation, an acceleration to $\mathcal{O}\big(t\alpha\sqrt{ \Delta}\log^{3/2}{(t\alpha/\epsilon)}\big)$ queries is obtained in Cor.~\ref{Cor:Ham_Sim_Spectral_Amplification}. \begin{theorem}[Uniform spectral amplification of low-energy subspaces] \label{Thm:Ham_Encoding_Uniform_Amplification} Given Hermitian standard-form-$(\hat{H},\alpha,\hat{U},d)$ with eigenstates $\hat{H}/\alpha\ket{\lambda}=\lambda\ket{\lambda}$, let $\Delta \in(0,1)$ be a positive constant, and $\hat{\Pi}=\sum_{\lambda \in[-1,-1+\Delta]}\ket{\lambda}\bra{\lambda}$ be a projector onto the low-energy subspace of $\hat{H}$. Then there exists a standard-form-$(\hat{H}_{\text{amp}},\Delta\alpha,\hat{V},4d)$ such that $\|\hat{\Pi}(\frac{\hat{H}_{\text{amp}}}{\Delta\alpha}-\frac{\hat{H}+\alpha\hat{I}(1-\Delta)}{\Delta\alpha})\hat{\Pi}\|\le \epsilon$, and $\hat{V}$ requires $\mathcal{O}(\Delta^{-1/2}\log^{3/2}{(\frac{1}{\Delta\epsilon})})$ queries to controlled-$\hat{U}$. \end{theorem} These results stem primarily from constructing polynomials with desirable properties, which we implement using the technique of Thm.~\ref{Thm:QSP_B}. This flexible variant of quantum signal processing is subject to fewer constraints than in prior art. Moreover, the advantage of quantum signal processing over the related technique of linear-combination-of-unitaries~\cite{Berry2015Hamiltonian} is its avoidance of Hamiltonian simulation as an intermediate step. This reduces overhead in space, query complexity, and error, and leads to an extremely simple algorithm that directly implements polynomial functions of $\hat{H}$ without any approximation. 
\begin{theorem}[Flexible quantum signal processing] \label{Thm:QSP_B} Given Hermitian standard-form-$(\hat{H},1,\hat{U},d)$, let $B$ be any function that satisfies all of the following conditions: \\ (1) ${B}(x)=\sum^N_{j=0}b_j x^j$ is a real parity-$(N \mod 2)$ polynomial of degree at most $N$; \\ (2) $B(0)=0$; \\ (3) $\forall x\in[-1,1]$, $B^2(x)\le 1$. \\ Then there exists a Hermitian standard-form-$(B[\hat{H}],1,\hat{V},4d)$, where $B[\hat{H}]=\sum^N_{j=0}b_j \hat{H}^j$, and $\hat{V}$ requires $\mathcal{O}(N)$ queries to controlled-$\hat{U}$ and $\mathcal{O}(N\log(d))$ primitive quantum gates pre-computed in classical $\mathcal{O}(\text{poly}(N))$ time. \end{theorem} \subsubsection{Uniform Spectral Amplification by Amplitude Multiplication} \label{Sec:Branch_AM} Alternatively, here we assume that the signal unitary $\hat{U}$ that realizes the standard-form encoding factors into two or three unitary quantum oracles $\hat{U}_\text{row}$, $\hat{U}_\text{col}$, and $\hat{U}_{\text{mix}}$, which we also call standard-form oracles. When the signal unitary factors into two components $\hat{U}=\hat{U}_\text{row}^\dag\hat{U}_\text{col}$, this constrains matrices represented in the standard-form to have matrix elements of $\hat{H}$ that are exactly the overlaps of appropriately defined quantum states, and generalizes the sparse matrix model first introduced by Childs~\cite{Childs2010} for quantum walks. When the signal unitary factors into three components $\hat{U}=\hat{U}_\text{row}^\dag\hat{U}_{\text{mix}}\hat{U}_\text{col}$, amplitude amplification can be applied to obtain non-trivial Hamiltonians. Note that amplitude amplification had been previously considered in the context of sparse Hamiltonian simulation~\cite{Berry2012}. However, its non-linearity introduced a polynomial dependence on error, which compounded into a polynomial overhead in scaling with respect to time and error. 
In contrast, our solution to the amplitude multiplication problem, Problem~\ref{Problem:Uniform_Amplitude_Amplification}, achieves uniform spectral amplification by multiplying all state overlaps by the same constant factor. Specializing the general result Lem.~\ref{Thm:Ham_Encoding_Uniform_Amplification_State_Overlaps} to the case of sparse Hamiltonians, which are described by standard black-box quantum oracles (Def.~\ref{Def:Sparse_Oracle}) for their non-zero matrix elements and positions, furnishes a simulation algorithm matching the complexity of Claim~\ref{Claim:Sparse_Ham_Sim}, up to logarithmic factors. Modulo these logarithmic factors, this is an improvement over prior art, with either a best-case square-root improvement in sparsity~\cite{Low2016HamSim}, or a polynomial improvement in time and an exponential improvement in precision~\cite{Berry2012}. \begin{definition}[Sparse matrix oracles~\cite{Berry2012}] \label{Def:Sparse_Oracle} Sparse matrices with at most $d$ non-zero elements in every row are specified by two oracles. The oracle $\hat{O}_{H}\ket{j}\ket{k}\ket{z}=\ket{j}\ket{k}\ket{z\oplus \hat{H}_{jk}}$ queried by $j\in[n]$ row and $k\in[n]$ column indices returns the value $\hat{H}_{jk}=\bra{j}\hat{H}\ket{k}$, with maximum absolute value $\|\hat{H}\|_{\text{max}}=\max_{jk}{|\hat{H}_{jk}|}$. The oracle $\hat{O}_{F}\ket{j}\ket{l}=\ket{j}\ket{f(j,l)}$ queried by $j\in[n]$ row and $l\in[d]$ column indices computes in-place the column index $f(j,l)$ of the $l^{\text{th}}$ non-zero entry of the $j^{\text{th}}$ row. \end{definition} \begin{theorem}[Sparse Hamiltonian simulation by amplified state overlap] \label{Cor:Ham_Sim_Sparse_Amplified} Given the $d$-sparse matrix oracles in Def.~\ref{Def:Sparse_Oracle} for the Hamiltonian $\hat{H}$, let $\|\hat{H}\|_{\text{max}}=\max_{jk}|\hat{H}_{jk}|$ be the max-norm, $\|\hat{H}\|_1=\max_{j}\sum_{k}|\hat{H}_{jk}|$ be the induced $1$-norm, and $\|\hat{H}\|$ be the spectral norm. 
Then $\forall t\ge 0,\; \epsilon >0$, the operator $e^{-i\hat{H}t}$ can be approximated with error $\epsilon$ using $\mathcal{O}\left(t(d\|\hat{H}\|_{\text{max}}\|\hat{H}\|_1)^{1/2}\log{(\frac{t\|\hat{H}\|}{\epsilon})}\left(1+\frac{1}{t\|\hat{H}\|_1}\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\right)\right)$ queries. \end{theorem} Observe that in the asymptotic limit $\|\hat{H}\|_1 t \gg \log{(1/\epsilon)}$, the query complexity simplifies to $\mathcal{O}\Big(t(d\|\hat{H}\|_{\text{max}}\|\hat{H}\|_1)^{1/2}\log{(\frac{t\|\hat{H}\|}{\epsilon})}\Big)$. The algorithm of Thm.~\ref{Cor:Ham_Sim_Sparse_Amplified} is particularly flexible. If none of the above norms are known, they may be replaced by any upper bound, such as those determined by the inequalities $\|\hat{H}\|_{\text{max}}\le \|\hat{H}\|\le \|\hat{H}\|_{1}\le d \|\hat{H}\|_{\text{max}}$~\cite{Childs2010Limitation}. Even in the worst case, the results are similar to previous optimal simulation algorithms. Moreover, the scaling in these parameters is optimal, as we prove a matching lower bound in Thm.~\ref{Thm:Lower_Bound} by finding a Hamiltonian that solves $\text{PARITY}\circ\text{OR}$. \begin{theorem} \label{Thm:Lower_Bound} For any $d\ge 1$, $s\ge 1$, and $t>0$, there exists a Hamiltonian $\hat{H}$ with sparsity $\Theta(d)$, $\|\hat{H}\|_{\text{max}}=\Theta(1)$, and $\|\hat{H}\|_1 = \Theta(s)$, such that approximating time evolution $e^{-i\hat{H}t}$ with constant error requires $\Omega(t\sqrt{d s})$ queries. \end{theorem} Some of these results stem from constructing polynomials with desirable properties, which we implement using the technique of Thm.~\ref{Thm:Controlled_Generalized_Amplitude_Amplification}. The existence of a weaker version of this amplitude amplification algorithm was suggested in our prior work~\cite{Low2016methodology}. Here, we present it in full and go further. This variant of amplitude amplification allows one to amplify target state overlaps with almost arbitrary polynomial functions. 
\begin{theorem}[Flexible amplitude amplification] \label{Thm:Controlled_Generalized_Amplitude_Amplification} Given a state preparation unitary $\hat{G}$ acting on the computational basis states $\ket{0}_a\in \mathbb{C}^d$, $\ket{0}_b\in \mathbb{C}^2$ such that $\hat{G}\ket{0}_a\ket{0}_b=\lambda\ket{t}_a\ket{0}_b+\sqrt{1-\lambda^2}\ket{t^\perp}_{ab}$, where $\ket{t^\perp}_{ab}$ has no support on $\ket{0}_b$, let $D$ be any function that satisfies all of the following conditions: \\ (1) $D$ is an odd real polynomial in $\lambda$ of degree at most $2N+1$; \\ (2) $\forall \lambda\in[-1,1]$, ${D}^2(\lambda)\le 1$. \\ Then there exists a quantum circuit $\hat{W}_{\vec\phi}$ such that $\bra{t}_a\bra{0}_b\bra{0}_c\hat{W}_{\vec\phi}\ket{0}_a\ket{0}_b\ket{0}_c=D(\lambda)$, using $N+1$ queries to $\hat{G}$, $N$ queries to $\hat{G}^\dag$, $\mathcal{O}(N\log{(d)})$ primitive quantum gates pre-computed from $D$ in classical $\mathcal{O}(\text{poly}(N))$ time, and an additional ancilla qubit $c$. \end{theorem} Amplitude multiplication in Thm.~\ref{Thm:Linear_Amplitude_Amplification} is then a special case that solves Problem~\ref{Problem:Uniform_Amplitude_Amplification} up to a factor of $\frac{1}{2}$ in the range of the input and output amplitudes. \begin{theorem}[Amplitude multiplication algorithm] \label{Thm:Linear_Amplitude_Amplification} $\forall\;\lambda \in [-1/2,1/2]$, $\Gamma \in (|\lambda|, 1/2]$, $\epsilon \le \mathcal{O}(\Gamma)$, let $\hat{G}$ be a state preparation unitary acting on the computational basis states $\ket{0}_a\in \mathbb{C}^d$, $\ket{0}_b\in \mathbb{C}^2$ such that $\hat{G}\ket{0}_a\ket{0}_b=\lambda\ket{t}_a\ket{0}_b+\sqrt{1-\lambda^2}\ket{t^\perp}_{ab}$, where $\ket{t^\perp}_{ab}$ has no support on $\ket{0}_b$. 
Then there exists a quantum circuit $\hat{G}'$ such that $\left|\bra{t}_a\bra{0}_b\bra{0}_c\hat{G}'\ket{0}_a\ket{0}_b\ket{0}_c- \frac{\lambda}{2\Gamma}\right|\le \frac{|\lambda|}{2\Gamma}\epsilon$, using $Q=\mathcal{O}(\Gamma^{-1}\log{(1/\epsilon)})$ queries to $\hat{G},\hat{G}^\dag$, $\mathcal{O}(Q\log{(d)})$ primitive quantum gates, and an additional ancilla qubit $c$. \end{theorem} \subsubsection{Universality of the Standard-Form} \label{Sec:Branch_UNI} Uniform spectral amplification is motivated by the idea that structure in the signal unitary and its encoded Hamiltonian can be fully exploited by focusing only on manipulating the standard-form, independent of any later application such as Hamiltonian simulation. This is supported by the simulation algorithm Thm.~\ref{Thm:Ham_Sim_Qubitization}, which is optimal with respect to all parameters when the standard-form is provided as a black-box oracle. This perspective would be further justified if one could rule out, to a reasonable extent, the existence of superior simulation algorithms not based on the standard-form. We show a certain universality of the standard-form by proving an equivalence between quantum circuits for simulation and those for quantum measurement, up to a logarithmic overhead in time and a constant overhead in space. Where Thm.~\ref{Thm:Ham_Sim_Qubitization} transforms a measurement of $\hat{H}$ to time-evolution by $e^{-i \hat{H}t}$, we prove the converse in Thm.~\ref{Thm:Standard_Form_From_Ham_Sim}, which transforms time-evolution $e^{-i \hat{H}t}$ back into a measurement of $\hat{H}$. In particular, this comes with an exponential improvement in precision over standard techniques based on quantum phase estimation. Thus any non-standard-form simulation algorithm for $e^{-i\hat{H}t}$ that exploits structure can always be mapped in this manner onto the standard-form with a small overhead. 
\begin{theorem}[Standard-form encoding by Hamiltonian simulation] \label{Thm:Standard_Form_From_Ham_Sim} Given oracle access to the controlled time-evolution $e^{-i\hat{H}}$ such that $\|\hat{H}\|\le 1/2$, there exists a standard-form-$(\hat{H}_{\text{lin}},1,\hat{U},4)$ such that $\|\hat{H}_{\text{lin}}-\hat{H}\| \le \epsilon$, where $\hat{U}$ requires $Q=\mathcal{O}\left(\log{(1/\epsilon)}\right)$ queries and $\mathcal{O}(Q)$ primitive quantum gates. \end{theorem} This is proven through the flexible quantum signal processing of Thm.~\ref{Thm:QSP_B} using a particular choice of polynomial. It is important to note, however, the caveat that our equivalence limits $\|\hat{H}t\| = \mathcal{O}(1)$, and also fails when time-evolution can be approximated with $o(t)$ queries. Fortunately, the latter scenario can be disregarded with limited loss as `no-fast-forwarding' theorems~\cite{Childs2010Limitation} prove the necessity of $\Omega(\|\hat{H}\|t)$ queries for generic computational problems and physical systems. One useful application of this reverse direction is an alternate technique, Cor.~\ref{Cor:HamExponentials}, for simulating time evolution by a sum of $d$ Hermitian components $\sum^d_{j=1}\hat{H}_j$, given their controlled-exponentials $e^{-i\hat{H}_jt_j}$. This approach is considerably simpler than that of compressed fractional queries~\cite{Berry2014}, and essentially works by using Thm.~\ref{Thm:Standard_Form_From_Ham_Sim} to map each $e^{-i\hat{H}_jt_j}$, where $\|\hat{H}_jt_j\|=\mathcal{O}(1)$, to a standard-form encoding of $\hat{H}_jt_j$. 
\begin{corollary}[Hamiltonian simulation with exponentials] \label{Cor:HamExponentials} Given standard-form-$(\sum^d_{j=1}\alpha_je^{-i\hat{H}_j},\alpha,\hat{G}^\dag_a\hat{U}\hat{G}_a,d)$, where $\hat{G}$ prepares $\ket{G}_a=\sum^d_{j=1}\sqrt{\alpha_j/\alpha}\ket{j}_a$ with $\alpha_j\ge 0$, normalization $\alpha=\sum^d_{j=1}\alpha_j$ and signal oracle $\hat{U}=\sum_{j=1}^d\ket{j}\bra{j}_a\otimes e^{-i \hat{H}_j}$, with $\|\hat{H}_j\|\le 1$, there exists a standard-form-$(\hat{X},1,\hat{V},4d)$ such that $\|\hat{X}-e^{-i\hat{H}t}\|\le\epsilon$, where $\hat{V}$ requires $Q=\mathcal{O}\left(\alpha t \log{(\alpha t/\epsilon)}+\frac{\log{(1/\epsilon)}\log{(\alpha t/\epsilon)}}{\log\log{(\alpha t/\epsilon)}}\right)$ controlled-queries, and $\mathcal{O}(Q\log{(d)})$ primitive quantum gates. \end{corollary} \subsection{Organization} The dependencies of our results are summarized in Figure~\ref{Fig:Dependencies}. \begin{itemize} \item [Part I] is where we achieve uniform spectral amplification by quantum signal processing. We describe in Sec.~\ref{Sec:Standard-form_QSP} the technique of quantum signal processing in prior art and prove the more useful variant Thm.~\ref{Thm:QSP_B}. This is applied in Sec.~\ref{Sec:Uniform_Hamiltonian_Amplification}, where we treat the signal unitary as a single unitary oracle, and prove the solutions Thm.~\ref{Cor:Operator_Amplification} and Thm.~\ref{Thm:Ham_Encoding_Uniform_Amplification} to the uniform spectral amplification problem. \item [Part II] is where we achieve uniform spectral amplification by amplitude multiplication. We prove in Sec.~\ref{Sec:AA_by_QSP} a generalization of amplitude amplification in Thm.~\ref{Thm:Controlled_Generalized_Amplitude_Amplification}, which is applied to obtain the amplitude multiplication algorithm of Thm.~\ref{Thm:Linear_Amplitude_Amplification}. Subsequently in Sec.~\ref{Sec:Ham_Sim_Overlaps}, we consider signal unitaries that factor into two or three unitary oracles. 
This motivates a general model of Hamiltonians encoded by state overlaps, where uniform spectral amplification in Lem.~\ref{Thm:Ham_Encoding_Uniform_Amplification_State_Overlaps} is enabled by amplitude multiplication. Applying these results to the special case of sparse matrices leads to the simulation algorithm Thm.~\ref{Cor:Ham_Sim_Sparse_Amplified}, which matches the lower bound Thm.~\ref{Thm:Lower_Bound}. \item [Part III] in Sec.~\ref{Sec:Equivalence_Sim_Mea} is where we offer a unifying perspective of simulation algorithms and prove a certain universality of the standard-form. This is through the equivalence between quantum circuits for simulation and those for measurement described by Thm.~\ref{Thm:Standard_Form_From_Ham_Sim}, and leads to the simulation algorithm Cor.~\ref{Cor:HamExponentials}. \end{itemize} We conclude in Sec.~\ref{Sec:Amp_concluson}. \begin{figure} \caption{Dependencies of new results.} \label{Fig:Dependencies} \end{figure} \section{Quantum Signal Processing Techniques} \label{Sec:Standard-form_QSP} Quantum signal processing is a very new technique~\cite{Low2016HamSim,Low2016hamiltonian}, based on optimal quantum control~\cite{Low2016methodology} and qubitization~\cite{Low2016hamiltonian}, for implementing polynomial functions of the Hamiltonian $\hat{H}$ given its standard-form encoding. This is performed with optimal query complexity, $\mathcal{O}(1)$ ancilla overhead, and without approximation. We outline in Sec.~\ref{Sec:Standard-form_QSP_Prior_art} the basic version Lem.~\ref{Thm:QSP_AB} that was introduced in~\cite{Low2016hamiltonian}, which imposes certain unintuitive constraints on valid polynomials. Subsequently, we prove in Sec.~\ref{Sec:QSP_Constraint_Free} its generalization Thm.~\ref{Thm:QSP_B} that drops these constraints, and is applied frequently to obtain our other results. 
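Before reviewing the construction, a small numerical sketch may help fix intuition for qubitization (this is our own toy illustration with a one-ancilla-qubit encoding, not the circuit of~\cite{Low2016hamiltonian}): interleaving the signal unitary with reflections about $\ket{0}_a$ yields a walk operator whose $\ket{0}_a$-block after $k$ steps is the Chebyshev polynomial $T_k(\hat{H}/\alpha)$, the raw polynomial family that quantum signal processing then reshapes into target functions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hermitian standard-form toy encoding (one ancilla qubit), as in Def. 1:
n = 3
H = rng.normal(size=(n, n))
H = (H + H.T) / 2
alpha = 1.2 * np.linalg.norm(H, 2)
A = H / alpha
w, V = np.linalg.eigh(A)
S = (V * np.sqrt(1 - w**2)) @ V.T
U = np.block([[A, S], [S, -A]])

# Walk operator: reflect about |0>_a on the ancilla, then query U.
R = np.kron(np.diag([1.0, -1.0]), np.eye(n))   # 2|0><0|_a - I
W = R @ U

# The |0>_a block of W^k is the degree-k Chebyshev polynomial T_k(H/alpha).
k = 5
Wk = np.linalg.matrix_power(W, k)
Tk = (V * np.cos(k * np.arccos(w))) @ V.T      # T_k on the eigenvalues of A
assert np.allclose(Wk[:n, :n], Tk)
```

In each invariant two-dimensional eigenspace the walk acts as a rotation by $\theta_\lambda=\cos^{-1}(\lambda)$, so its $k$-th power has $\cos(k\theta_\lambda)=T_k(\lambda)$ in the $\ket{0}_a$ component.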
\subsection{Quantum Signal Processing in Prior Art} \label{Sec:Standard-form_QSP_Prior_art} Given any Hermitian matrix encoded in the standard-form-$(\hat{H},\alpha,\hat{U},d)$ of Def.~\ref{Def:Standard_Form}, let Hermitian $\hat{H}\in \mathbb{C}^{n\times n}:\mathcal{H}_s\rightarrow \mathcal{H}_s$ act on the system Hilbert space $\mathcal{H}_s$ of dimension $n$. Then the signal unitary $\hat{U}\in\mathbb{C}^{nd\times nd}:\mathcal{H}_s\otimes \mathcal{H}_a\rightarrow \mathcal{H}_s\otimes \mathcal{H}_a$ acts jointly on the system register $\mathcal{H}_s$ and the dimension-$d$ ancilla register $\mathcal{H}_a$. Using the computational basis state $\ket{0}_a$, $(\bra{0}_a\otimes \hat{I}_s)\hat{U}(\ket{0}_a\otimes \hat{I}_s)=\hat{H}/\alpha$ with normalization $\alpha \ge \|\hat{H}\|$. Note that in~\cite{Low2016hamiltonian}, a different measurement basis $\ket{G}_a=\hat{G}\ket{0}_a\in \mathcal{H}_a$ is used to encode $(\bra{G}_a\otimes \hat{I}_s)\hat{U}(\ket{G}_a\otimes \hat{I}_s)=\hat{H}/\alpha$ as some structured Hamiltonians are more naturally represented that way. Assuming oracle access to the state preparation unitary $\hat{G}$, this is entirely equivalent as we may always absorb $\hat{G}$ into a redefinition $\hat{G}^\dag\hat{U}\hat{G}$ of the signal unitary. In Sec.~\ref{Sec:Standard-form_QSP} and Sec.~\ref{Sec:Equivalence_Sim_Mea} only, we find it useful to have $\hat{G}$ explicit, and also absorb the normalization into a rescaled Hamiltonian $\hat{H}'=\hat{H}/\alpha$ with eigenstates $\hat{H}'\ket{\lambda}=\lambda\ket{\lambda}$ and spectral norm $\|\hat{H}'\|\le 1$. Quantum signal processing~\cite{Low2016HamSim} characterizes the query complexity of implementing large classes of functions $f[\hat{H}']\doteq\sum_\lambda f(\lambda)\ket{\lambda}\bra{\lambda}$. 
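The target object $f[\hat{H}']$ itself is elementary to compute classically; the following sketch (our own, with an arbitrary toy matrix) constructs it by diagonalization, which is exactly what the quantum circuits below approximate using queries instead:

```python
import numpy as np

rng = np.random.default_rng(2)

# Rescaled Hermitian H' with ||H'|| <= 1, as in the text.
n = 4
Hp = rng.normal(size=(n, n))
Hp = (Hp + Hp.T) / 2
Hp /= 1.1 * np.linalg.norm(Hp, 2)

def f_of_H(f, Hp):
    """f[H'] = sum_lambda f(lambda)|lambda><lambda| via diagonalization."""
    w, V = np.linalg.eigh(Hp)
    return (V * f(w)) @ V.conj().T

# For a polynomial f, this equals the same matrix polynomial in H':
poly = f_of_H(lambda x: 2 * x**3 - x, Hp)
assert np.allclose(poly, 2 * np.linalg.matrix_power(Hp, 3) - Hp)

# The Hamiltonian simulation target is the choice f(x) = exp(-i x t):
Ut = f_of_H(lambda x: np.exp(-1j * x * 3.0), Hp)
assert np.allclose(Ut @ Ut.conj().T, np.eye(n))   # unitary, as expected
```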
Using $\mathcal{O}(N)$ standard-form queries, $\mathcal{O}(N\log{(d)})$ primitive quantum gates, and at most $1$ additional ancilla qubit $b$, one can construct a useful quantum circuit $\hat{W}_{\vec{\phi}}$, the \emph{composite qubiterate} depicted in Fig.~\ref{Fig:Circuit_Qubitization_QSP}, that is parameterized by $ \vec{\phi}\in \mathbb{R}^N$ and an ancilla state $\ket{0}_{ab}$. The gate cost of reflections about the $2d$-dimensional state $\ket{0}_a\ket{0}_b$ depends on the $2^{\mathcal{O}(\log_2 d)}$-controlled Toffoli gate. An $\mathcal{O}(\log(d))$ primitive gate decomposition is provided in~\cite{He2017decompositions} using any other uninitialized ancilla qubit, which we may take from register $s$. For each eigenstate $\ket{\lambda}_s$, $\hat{W}_{\vec{\phi}}$ has the following properties: \begin{align} \label{Eq:QSP_Baseline} \hat{W}_{\vec{\phi}}\ket{0}_{ab}\ket{\lambda}_s&=e^{-i \hat{\sigma}_{\phi_{N}}\theta_\lambda}e^{-i \hat{\sigma}_{\phi_{N-1}}\theta_\lambda}\cdots e^{-i \hat{\sigma}_{\phi_{1}}\theta_\lambda}\ket{0}_{ab}\ket{\lambda}_s, \\\nonumber &= \left(\mathcal A(\theta_\lambda)\hat{I}_\lambda+i\mathcal B(\theta_\lambda)\hat\sigma_{z,\lambda} + i\mathcal C(\theta_\lambda)\hat\sigma_{x,\lambda}+i\mathcal D(\theta_\lambda)\hat\sigma_{y,\lambda}\right)\ket{0}_{ab}\ket{\lambda}_s \\\nonumber &= (\mathcal A(\theta_\lambda)+i\mathcal B(\theta_\lambda))\ket{0}_{ab}\ket{\lambda}_s + (i\mathcal C(\theta_\lambda)-\mathcal D(\theta_\lambda))\ket{0\lambda^\perp}_{abs}, \quad (\bra{0}_{ab}\bra{\lambda}_s)\ket{0\lambda^\perp}_{abs}=0, \end{align} where $\theta_\lambda = \cos^{-1}{(\lambda)}$. 
The Pauli matrices $\hat{I}_\lambda,\hat{\sigma}_{x,\lambda},\hat{\sigma}_{y,\lambda},\hat{\sigma}_{z,\lambda}$ act on the two-dimensional subspace $\mathcal{H}_\lambda=\text{span}\{\ket{0}_{ab}\ket{\lambda}_s,\ket{0\lambda^\perp}_{abs}\}$ with bases defined through $\hat{\sigma}_{z,\lambda} \ket{0}_{ab}\ket{\lambda}_s=\ket{0}_{ab}\ket{\lambda}_s$, $\hat{\sigma}_{z,\lambda} \ket{0\lambda^\perp}_{abs}=-\ket{0\lambda^\perp}_{abs}$. The only property of the states $\ket{0\lambda^\perp}_{abs}$ that concerns us is that they are mutually orthogonal, and also orthogonal to all states $\ket{0}_{ab}\ket{\lambda}_s$. Note that the functions $(\mathcal A,\mathcal B,\mathcal C,\mathcal D)$ of an angle are implicitly parameterized by $\vec\phi$. We find it useful to define the functions $(A,B,C,D)$ of $\lambda$ related by a variable substitution, e.g.\ $\mathcal A(\theta_\lambda)=A(\cos{(\theta_\lambda)})$. These functions are not independent as unitarity at the very least requires $\mathcal A^2+\mathcal B^2+\mathcal C^2+\mathcal D^2=1$. By identifying $\hat{W}_{\vec{\phi}}$ as the signal unitary and $\ket{0}_{ab}$ as the measurement basis, $(\bra{0}_{ab}\otimes\hat{I}_s)\hat{W}_{\vec{\phi}}(\ket{0}_{ab}\otimes\hat{I}_s)=A[\hat{H}']+iB[\hat{H}']$ itself encodes the matrix $A[\hat{H}']+iB[\hat{H}']$ in standard-form-$(A[\hat{H}']+iB[\hat{H}'],1,\hat{W}_{\vec{\phi}},2d)$. 
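These relations are easy to probe numerically. The sketch below (our own, assuming the $x$--$y$-plane convention $\hat\sigma_\phi=\cos{(\phi)}\hat\sigma_x+\sin{(\phi)}\hat\sigma_y$ of~\cite{Low2016methodology}) multiplies out a rotation sequence, extracts $(\mathcal A,\mathcal B,\mathcal C,\mathcal D)$, and checks the unitarity constraint $\mathcal A^2+\mathcal B^2+\mathcal C^2+\mathcal D^2=1$ together with the parity of $A$ and $B$ in $x=\cos{(\theta)}$:

```python
import numpy as np

rng = np.random.default_rng(3)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

def rotation_product(phis, theta):
    """exp(-i sig_{phi_N} theta) ... exp(-i sig_{phi_1} theta),
    with sig_phi = cos(phi) X + sin(phi) Y (an assumed convention)."""
    M = I2
    for phi in phis:
        sig = np.cos(phi) * X + np.sin(phi) * Y
        M = (np.cos(theta) * I2 - 1j * np.sin(theta) * sig) @ M
    return M

def abcd(M):
    """Decompose M = A*I + i(B*Z + C*X + D*Y) with real A, B, C, D."""
    return (np.trace(M).real / 2, np.trace(Z @ M).imag / 2,
            np.trace(X @ M).imag / 2, np.trace(Y @ M).imag / 2)

N = 5
phis = rng.uniform(0, 2 * np.pi, size=N)
theta = rng.uniform(0, np.pi)

A, B, C, D = abcd(rotation_product(phis, theta))
assert np.isclose(A**2 + B**2 + C**2 + D**2, 1.0)   # unitarity constraint

# theta -> pi - theta sends x = cos(theta) to -x, so the parity-(N mod 2)
# polynomials A and B pick up a sign (-1)^N.
Ap, Bp, _, _ = abcd(rotation_product(phis, np.pi - theta))
assert np.isclose(Ap, (-1) ** N * A) and np.isclose(Bp, (-1) ** N * B)
```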
\begin{figure}\label{Fig:Circuit_Qubitization_QSP} \end{figure} We previously studied~\cite{Low2016methodology} sequences of single-qubit rotations isomorphic to those in Eq.~\ref{Eq:QSP_Baseline}: \begin{align} \label{Eq:QSP_Single_Qubit} e^{-i \hat{\sigma}_{\phi_{N}}\theta}e^{-i \hat{\sigma}_{\phi_{N-1}}\theta}\cdots e^{-i \hat{\sigma}_{\phi_{1}}\theta} = \mathcal{A}(\theta)\hat{I}+i\mathcal{B}(\theta)\hat\sigma_{z} + i\mathcal{C}(\theta)\hat\sigma_{x}+i\mathcal{D}(\theta)\hat\sigma_{y}, \end{align} fully characterized the functions $(\mathcal{A},\mathcal{B},\mathcal{C},\mathcal{D})$ implementable by any choice of $\vec{\phi}$, and also provided an efficient classical algorithm to invert any valid partial specification of $(\mathcal{A},\mathcal{B},\mathcal{C},\mathcal{D})$ to obtain its implementation $\vec{\phi}$. For instance, we have the following result regarding Eq.~\ref{Eq:QSP_Single_Qubit}. \begin{lemma}[Achievable $(\mathcal{A},\mathcal{B})$ -- Thm.~2.3 of~\cite{Low2016methodology}] \label{Lem:AchievableAB} For any integer $N>0$, a choice of functions $\mathcal{A},\mathcal{B}$ in Eq.~\ref{Eq:QSP_Single_Qubit} is achievable by some $\vec\phi\in\mathbb{R}^{N}$ if and only if all the following are true:\\ (1) $\mathcal{A}(\theta)= A(x),\mathcal{B}(\theta)= B(x)$, where $A,B$ are real parity-$(N\mod{2})$ polynomials in $x=\cos{(\theta)}$ of degree at most $N$; \\ (2) $A(1)=1$; \\ (3) $\forall x\in[-1,1]$, $A^2(x)+B^2(x)\le 1$; \\ (4) $\forall x\ge 1$, $A^2(x)+B^2(x)\ge 1$; \\ (5) $\forall N\;\text{even}, x\ge 0$, $A^2(ix)+B^2(ix)\ge 1$. \\ Moreover, $\vec\phi\in\mathbb{R}^{N}$ can be computed in classical $\mathcal{O}(\text{poly}(N))$ time. \end{lemma} This automatically implies the following quantum signal processing result regarding Eq.~\ref{Eq:QSP_Baseline}. 
\begin{lemma}[Quantum signal processing; adapted from~\cite{Low2016hamiltonian}] \label{Thm:QSP_AB} Given Hermitian standard-form-$(\hat{H},1,\hat{U},d)$, let any $A,B$ be degree $N$ polynomials that satisfy the conditions of Lem.~\ref{Lem:AchievableAB}. Then there exists a standard-form-$(A[\hat{H}]+iB[\hat{H}],1,\hat{W}_{\vec\phi},2d)$, where $\hat{W}_{\vec\phi}$ requires $\mathcal{O}(N)$ queries to controlled-$\hat{U}$ and $\mathcal{O}(N\log(d))$ primitive quantum gates. \end{lemma} The many other partial specifications of $(\mathcal A,\mathcal B,\mathcal C,\mathcal D)$ described in~\cite{Low2016methodology} imply analogous constructions. Relevant to us are characterizations of achievable $(\mathcal B),(\mathcal C,\mathcal D),(\mathcal D)$, stated in Lems.~\ref{Thm:AchievableB},~\ref{Thm:AchievableCD}, and~\ref{Lem:AchievableD}, respectively. These powerful tools reduce the problem of designing quantum circuits for arbitrary target functions $f[\hat{H}']$ to finding good polynomial approximations to $f(x)$ over the interval $x\in[-1,1]$, of which the optimal Hamiltonian simulation result $f[\hat{H}']=e^{-i\hat{H}'t}$ in Thm.~\ref{Thm:Ham_Sim_Qubitization} is an example. In the following, we focus on the query complexity, as any ancilla overhead will always be $\mathcal{O}(1)$ (at most $3$ qubits), and the additional number of primitive gates required will typically be only a multiplicative factor $\mathcal{O}(\log{(d)})$ of the query complexity. \subsection{Flexible Quantum Signal Processing} \label{Sec:QSP_Constraint_Free} Lem.~\ref{Thm:QSP_AB} would be more useful if we could drop the unintuitive constraints (4,5), which impose restrictions on what the target functions must be \emph{outside} the domain of interest.
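For intuition on conditions (2)-(5) of Lem.~\ref{Lem:AchievableAB}, one valid choice is $A=T_N$ (a Chebyshev polynomial of the first kind) with $B=0$. The following quick numerical check, a sketch rather than anything from the paper, verifies each condition for an even $N$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 6
TN = C.Chebyshev.basis(N)                     # candidate A = T_N, with B = 0

xs = np.linspace(-1, 1, 2001)
assert abs(TN(1.0) - 1.0) < 1e-12             # (2): A(1) = 1
assert np.max(np.abs(TN(xs))) <= 1 + 1e-9     # (3): A^2 <= 1 on [-1, 1]
# (4): A^2 >= 1 for x >= 1
assert np.min(np.abs(TN(np.linspace(1, 3, 200)))) >= 1 - 1e-9
# (5), N even: A^2(ix) >= 1 for x >= 0 (T_N(ix) is real for even N)
assert np.min(np.abs(TN(1j * np.linspace(0, 2, 200))) ** 2) >= 1 - 1e-9
```

Conditions (4) and (5) are exactly the kind of constraints outside $[-1,1]$ that the flexible construction below removes.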
In Thm.~\ref{Thm:QSP_B}, we present a generalization that computes functions with only one component $B[\hat{H}'] = (\bra{0}_{abc}\otimes\hat{I}_s)\hat{V}_{\vec{\phi}}(\ket{0}_{abc}\otimes\hat{I}_s)$ without those constraints, using an additional single-qubit ancilla register $c$. Note that this does not follow immediately from the discussion of Sec.~\ref{Sec:Standard-form_QSP_Prior_art} as the constraint $A(1)=1$ means there will always be some $A$ component, even if the characterizations of other partial specifications of $(A,B,C,D)$ are used. The trick is to exploit the structure of single-qubit rotations Eq.~\ref{Eq:QSP_Single_Qubit} to stage a perfect cancellation of the $A[\hat{H}']$ term by taking a linear combination of the two standard-form encodings $(\bra{0}_{ab}\otimes\hat{I}_s)\hat{W}_{\pm\vec{\phi}}(\ket{0}_{ab}\otimes\hat{I}_s)=A[\hat{H}']\pm iB[\hat{H}']$. \begin{proof}[Proof of Thm.~\ref{Thm:QSP_B}] Consider the composite qubiterate in Eq.~\ref{Eq:QSP_Baseline} controlled by a single-qubit ancilla $c$. Let \begin{align} \hat{V}'_{\vec\phi}=-i\ket{1}\bra{0}_c\otimes\hat{W}_{\vec\phi}+i\ket{0}\bra{1}_c\otimes\hat{W}_{-\vec\phi}=(\hat{\sigma}_{y}\otimes\hat{I}_{abs})(\ket{0}\bra{0}_c\otimes\hat{W}_{\vec\phi}+\ket{1}\bra{1}_c\otimes\hat{W}_{-\vec\phi}). \end{align} Note that details in the construction of $\hat{W}_{\vec{\phi}}$ actually allow for the implementation of $\hat{V}'_{\vec\phi}$ with the same query complexity, as seen in Figure~\ref{Fig:Circuit_Qubitization_Flexible_QSP}.
By applying the similarity transformation $\hat{\sigma}_xe^{-i\hat{\sigma}_\phi\theta}\hat{\sigma}_x=e^{-i\hat{\sigma}_{-\phi}\theta}$, $\hat{\sigma}_x\hat{\sigma}_z\hat{\sigma}_x=-\hat{\sigma}_z$, and $\hat{\sigma}_x\hat{\sigma}_y\hat{\sigma}_x=-\hat{\sigma}_y$, \begin{align} \hat{W}_{-\vec{\phi}}\ket{0}_{ab}\ket{\lambda}_s&=e^{-i \hat{\sigma}_{-\phi_{N}}\theta_\lambda}e^{-i \hat{\sigma}_{-\phi_{N-1}}\theta_\lambda}\cdots e^{-i \hat{\sigma}_{-\phi_{1}}\theta_\lambda}\ket{0}_{ab}\ket{\lambda}_s, \\\nonumber &= \left(A(\lambda)\hat{I}_\lambda-iB(\lambda)\hat\sigma_{z,\lambda} + iC(\lambda)\hat\sigma_{x,\lambda}-iD(\lambda)\hat\sigma_{y,\lambda}\right)\ket{0}_{ab}\ket{\lambda}_s. \end{align} Thus using the ancilla state $\ket{+}_c\ket{0}_{ab}$, where $\ket{\pm}=\frac{1}{\sqrt{2}}(\ket{0}\pm\ket{1})$, as the input to $\hat{V}'_{\vec\phi}$ results in: \begin{align} \label{Eq:Controlled_Generalized_Reflections} \hat{V}'_{\vec\phi}\ket{+}_c\ket{0}_{ab}\ket{\lambda}_s&= \left(-i A(\lambda)\ket{-}_c+B(\lambda)\ket{+}_c\right)\ket{0}_{ab}\ket{\lambda}_s+\left(C(\lambda)\ket{-}_c+D(\lambda)\ket{+}_c\right)\ket{0\lambda^\perp}_{abs}. \end{align} Thus $(\bra{+}_c\bra{0}_{ab}\otimes\hat{I}_s)\hat{V}'_{\vec{\phi}}(\ket{+}_c\ket{0}_{ab}\otimes\hat{I}_s)=B[\hat{H}']$ encodes $B[\hat{H}']$ in standard-form, independently of all the other functions $A,C,D$, which are in general non-zero. We may therefore apply Lem.~\ref{Thm:AchievableB} on achievable $(B)$ even though all other components are in general non-zero. Finally, let $\hat{V}_{\vec\phi}=(\widehat{\text{Had}}\otimes\hat{I}_{abs})\hat{V}'_{\vec{\phi}}(\widehat{\text{Had}}\otimes\hat{I}_{abs})$.
\end{proof} \begin{figure}\label{Fig:Circuit_Qubitization_Flexible_QSP} \end{figure} \begin{lemma}[Achievable $(\mathcal{B})$ -- Thm.~3.2 of~\cite{Low2016methodology}] \label{Thm:AchievableB} For any integer $N>0$, a choice of function $\mathcal{B}$ in Eq.~\ref{Eq:QSP_Single_Qubit} is achievable by some $\vec\phi\in\mathbb{R}^{N}$ if and only if all the following are true:\\ (1) $\mathcal B(\theta)= {B}(x)$, where ${B}$ is a real parity-$(N\mod{2})$ polynomial in $x=\cos{(\theta)}$ of degree at most $N$; \\ (2) $B(0)=0$; \\ (3) $\forall x\in[-1,1]$, $B^2(x)\le 1$. \\ Moreover, $\vec\phi\in\mathbb{R}^{N}$ can be computed in classical $\mathcal{O}(\text{poly}(N))$ time. \end{lemma} With Thm.~\ref{Thm:QSP_B}, we are assured that any degree $N$ bounded matrix polynomial that goes to zero at the origin can be implemented exactly on a quantum computer using $\mathcal{O}(N)$ queries, $\mathcal{O}(N)$ additional primitive quantum gates, and $\mathcal{O}(1)$ additional ancilla qubits. \section{Uniform Spectral Amplification by Quantum Signal Processing} \label{Sec:Uniform_Hamiltonian_Amplification} When provided with no information on any structure in the standard-form encoding $(\bra{0}_a\otimes \hat{I}_s)\hat{U}(\ket{0}_a\otimes \hat{I}_s)=\hat{H}/\alpha$ of the Hermitian matrix $\hat{H}$, all we have is access to the signal oracle $\hat{U}$. Thus our only option is to apply quantum signal processing and study the polynomial functions $f[\cdot]$ of $\hat{H}/\alpha$ that achieve uniform spectral amplification. In this setting, Thm.~\ref{Cor:Operator_Amplification} performs uniform spectral amplification, though the trade-off between its implementation cost and the achieved reduction of $\alpha$ provides no advantage to Hamiltonian simulation. However, a speedup is possible through Thm.~\ref{Thm:Ham_Encoding_Uniform_Amplification} when interested only in the lower energy subspace of $\hat{H}$. 
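As a concrete illustration of a standard-form encoding, the sketch below uses the elementary one-ancilla-qubit construction that places $\hat{H}/\alpha$ in the top-left block of a unitary (an assumption made for illustration; it is not the paper's general ancilla structure) and confirms the encoded block and the eigenvalue bound $|\lambda|\le\|\hat{H}\|/\alpha\le 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (M + M.conj().T) / 2                        # a Hermitian signal matrix
alpha = 1.5 * np.linalg.norm(H, 2)              # normalization alpha >= ||H||

Hn = H / alpha
# PSD square root of I - (H/alpha)^2 via eigendecomposition; it commutes with Hn
w, V = np.linalg.eigh(np.eye(d) - Hn @ Hn)
S = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# one ancilla qubit: (<0| (x) I) U (|0> (x) I) = H/alpha
U = np.block([[Hn, S], [S, -Hn]])
assert np.allclose(U @ U.conj().T, np.eye(2 * d))     # U is unitary
assert np.allclose(U[:d, :d], Hn)                     # top-left block is H/alpha
# eigenvalues of the encoded block obey |lambda| <= ||H||/alpha <= 1
assert np.max(np.abs(np.linalg.eigvalsh(Hn))) <= np.linalg.norm(H, 2) / alpha + 1e-12
```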
As the normalization $\alpha$ is always greater than or equal to $\|\hat{H}\|$, any input state $\ket{\psi}$ on the system has support only on eigenstates $\hat{H}/\alpha\ket{\lambda}=\lambda\ket{\lambda}$ with eigenvalues $|\lambda| \le \|\hat{H}\|/\alpha\le 1$. Given an upper bound $\Lambda \in [ \|\hat{H}\|,\alpha]$ on the spectral norm, this means that in any polynomial function $p(x)$ that we construct, only its restriction to the domain $x\in[-\Lambda/\alpha,\Lambda/\alpha]$ is of interest, so long as $|p(x)|$ remains bounded by $1$ over $x\in[-1,1]$. Thus one approach to minimizing the normalization is to use quantum signal processing to encode a polynomial with the property $p[\hat{H}/\alpha]\approx \frac{\hat{H}}{2\Lambda}$ in standard-form. Accordingly, we seek a polynomial that approximates a truncated linear function, such as \begin{align} \label{Eq:Linear_target_function} f_{\text{lin},\Gamma}(x)= \begin{cases} \frac{x}{2\Gamma}, & |x| \in [0, \Gamma], \\ \in [-1,1], & |x| \in (\Gamma,1]. \end{cases} \end{align} In Thm.~\ref{Thm.Polynomial_LAA} of Appendix.~\ref{Sec:Polynomials_Amplitude_Multiplication}, we approximate $f_{\text{lin},\Gamma}(x)$ with a polynomial with the following properties: $\forall\;\Gamma \in [0,1/2]$ and $\epsilon \le\mathcal{O}(\Gamma)$, the odd polynomial $p_{\text{lin},\Gamma,n}$ of degree $n=\mathcal{O}(\Gamma^{-1}\log{(1/\epsilon)})$ satisfies \begin{align} \forall\; {x\in[- \Gamma,\Gamma]},\; \left|p_{\text{lin},\Gamma,n}(x)- \frac{x}{2\Gamma}\right|\le \frac{\epsilon|x|}{2\Gamma} \quad\text{and}\quad \max_{x\in [-1,1]} |p_{\text{lin},\Gamma,n}(x)|\le 1. \end{align} This polynomial satisfies the conditions of flexible quantum signal processing in Thm.~\ref{Thm:QSP_B}, and provides us with the solution Thm.~\ref{Cor:Operator_Amplification} to uniform spectral amplification.
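While the explicit construction of $p_{\text{lin},\Gamma,n}$ is deferred to the appendix, its defining properties are easy to probe numerically. The sketch below uses a least-squares Chebyshev fit as a crude stand-in (not the paper's construction) for an odd, bounded polynomial approximating the truncated linear function:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

Gamma, deg = 0.25, 31
xs = np.linspace(-1, 1, 4001)
# truncated linear target: x/(2*Gamma) on [-Gamma, Gamma], clipped to +-1/2 outside
target = np.sign(xs) * np.minimum(np.abs(xs), Gamma) / (2 * Gamma)

coef = C.Chebyshev.fit(xs, target, deg).coef
coef[::2] = 0.0                               # keep odd Chebyshev terms only -> odd polynomial
p = C.Chebyshev(coef)
scale = max(1.0, np.max(np.abs(p(xs))))       # rescale only if |p| ever exceeds 1
p = C.Chebyshev(coef / scale)

mask = np.abs(xs) <= Gamma
err = np.max(np.abs(p(xs[mask]) - xs[mask] / (2 * Gamma)))
print(f"deg {deg}: max deviation from x/(2 Gamma) on [-Gamma, Gamma] = {err:.2e}")
```

The stand-in exhibits the qualitative trade-off in the theorem: a larger degree shrinks the deviation on $[-\Gamma,\Gamma]$ while the polynomial stays odd and bounded by $1$ on $[-1,1]$.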
\begin{proof}[Proof of Thm.~\ref{Cor:Operator_Amplification}] Given Hermitian standard-form-$(\hat{H},\alpha,\hat{U},d)$ and an upper bound $\Lambda\in[\|\hat{H}\|,\alpha]$, define $\Gamma = \Lambda/\alpha\le 1$. Using Thm.~\ref{Thm:QSP_B} with the polynomial $p_{\text{lin},\Gamma,n}$, encode $p_{\text{lin},\Gamma,n}[\hat{H}/\alpha]\approx \frac{\hat{H}}{2\Gamma\alpha}=\frac{\hat{H}}{2\Lambda}$ in Hermitian standard-form-$(p_{\text{lin},\Gamma,n}[\hat{H}/\alpha],1,\hat{V},4d)$. This requires $\mathcal{O}(n)$ queries, and is identical to the Hermitian standard-form-$(2\Lambda p_{\text{lin},\Gamma,n}[\hat{H}/\alpha],2\Lambda ,\hat{V},4d)$. Define $\hat{H}_{\text{amp}}=2\Lambda p_{\text{lin},\Gamma,n}[\hat{H}/\alpha]$. Then the error of approximation is $\left\|\frac{\hat{H}_{\text{amp}}}{2\Lambda}-\frac{\hat{H}}{2\Lambda}\right\|\le \max_{x\in[-\Lambda,\Lambda]} \left|p_{\text{lin},\Gamma,n}\left(\frac{x}{\alpha}\right)-\frac{x}{2\Lambda}\right|\le \max_{x\in[-\Gamma,\Gamma]} \left|p_{\text{lin},\Gamma,n}(x)-\frac{x}{2\Gamma}\right|\le \frac{\epsilon_1}{2}$. Finally, note that $p_{\text{lin},\Gamma,n}$ requires $\epsilon_1\le\mathcal{O}(\Gamma)$, and has degree scaling like $n=\mathcal{O}(\Gamma^{-1}\log{(1/\epsilon_1)})$, so let us define $\epsilon = \frac{\epsilon_1}{2}$. \end{proof} \begin{comment} Though we do provide this polynomial with degree $\mathcal{O}(\alpha\Lambda^{-1}\log{(1/\epsilon)})$ in Cor.~\ref{Lem:Polynomial_Truncated_Linear}, a shorter alternate proof of Thm.\ref{Cor:Operator_Amplification} uses previous results Thms.\ref{Thm:Ham_Sim_Qubitization} and \ref{Thm:Standard_Form_From_Ham_Sim}, and has a query complexity worse by only an additive term $\frac{\log{(1/\epsilon)}\log({\log{(1/\epsilon)}/\epsilon})}{\log\log({\log{(1/\epsilon)}/\epsilon})}$.
\begin{proof}[Proof of Thm.~\ref{Cor:Operator_Amplification} - alternate version] Given $(\bra{0}\otimes \hat{I})\hat{U}(\ket{0}\otimes \hat{I})=\hat{H}/\alpha$, apply Thm.~\ref{Thm:Ham_Sim_Qubitization} to implement $e^{-i\hat{H}_{\text{approx}}/(2\Lambda)}$ that approximates $e^{-i\hat{H}/(2\Lambda)}$ with error $\|e^{-i\hat{H}_{\text{approx}}/(2\Lambda)}-e^{-i\hat{H}/(2\Lambda)}\|\le \epsilon_1$. Thus uses $N_1=\mathcal{O}(\alpha\Lambda^{-1}+\frac{\log({1/\epsilon_1})}{\log\log({1/\epsilon_1})})$ queries to $\hat{U}$. Provided that $\|\frac{\hat{H}}{\Lambda}\|\le 1$, we can apply Thm.~\ref{Thm:Standard_Form_From_Ham_Sim} to encode the Hamiltonian $\hat{H}_\text{amp}$ in standard-form with normalization $(2\Lambda)$ such that $\frac{1}{2\Lambda}\|\hat{H}_\text{amp}-\hat{H}_{\text{approx}}\|\le \epsilon_2$. This uses $N_2=\mathcal{O}( \log{(1/\epsilon_2)})$ queries to $e^{-i\hat{H}_{\text{approx}}/(2\Lambda)}$. By adding these error, $\frac{1}{2\Lambda}\|\hat{H}_\text{amp}-\hat{H}\|\le \epsilon_2+N_2 \epsilon_1$. Let $\epsilon_2=\epsilon/2$ and $\epsilon_1 = \epsilon/(2N_2)$. Then the total query complexity $N_1 N_2 = \mathcal{O}((\alpha\Lambda^{-1}+\frac{\log({\log{(1/\epsilon)}/\epsilon})}{\log\log({\log{(1/\epsilon)}/\epsilon})})\log{(1/\epsilon)})$. \end{proof} \end{comment} Unfortunately, this provides absolutely no advantage to Hamiltonian simulation as the decrease in normalization by factor $\alpha/\Lambda$ is exactly balanced by an increase in query complexity by factor $\alpha/\Lambda$. Nevertheless, Thm.~\ref{Cor:Operator_Amplification} may be of use to applications involving measurement such as quantum metrology and repeat-until-success circuits, as the success probability $\|\frac{\hat{H}}{\Lambda}\|^2$ is improved by a quadratic factor $(\alpha/\Lambda)^2$. This is analogous to oblivious amplitude amplification which only applies to matrices that are approximately unitary~\cite{Berry2014}. 
One workable possibility is highlighted by the deep connection between quantum signal processing and the properties of polynomials. Thm.~\ref{Cor:Operator_Amplification} uses a degree $\mathcal{O}(\Lambda^{-1})$ polynomial with maximum gradient $\mathcal{O}(\Lambda^{-1})$. Yet a famous inequality by Markov indicates a best-case quadratic advantage in the gradient $p'$ of any degree $n$ polynomial: $\max_{x\in[-1,1]}|p'(x)|\le n^2\max_{x\in[-1,1]}|p(x)|$. Thus we have not fully exhausted the capabilities of polynomials. As this inequality becomes an equality for Chebyshev polynomials of the first kind $T_L(x)=\cos{(L \cos^{-1}{(x)})}$ at $x=\pm1$, this suggests that a speedup is possible if we are only concerned with time evolution on eigenstates with eigenvalues $|\lambda| \in [1-\Delta,1]$, where $\Delta\ll 1$. With this assumption, we may prove Thm.~\ref{Thm:Ham_Encoding_Uniform_Amplification}. \begin{proof}[Proof of Thm.~\ref{Thm:Ham_Encoding_Uniform_Amplification}] Consider the truncated linear function \begin{align} \label{Eq:Gapped_target_function} f_{\text{gap},\Delta}(x)= \begin{cases} \frac{x+1-\Delta}{\Delta}, & x \in [-1, -1+\Delta], \\ \in[-1,1], & \text{otherwise}. \end{cases} \end{align} As $\hat{\Pi}(f_{\text{gap},\Delta}[\frac{\hat{H}}{\alpha}]-\frac{\hat{H}+\alpha\hat{I}(1-\Delta)}{\Delta\alpha})\hat{\Pi}=0$, the theorem is proven by finding a degree-$n$ odd polynomial $p_{\text{gap},\Delta,n}(x)$ that uniformly approximates $f_{\text{gap},\Delta}(x)$ with error $\max_{x\in[-1,-1+\Delta]}|p_{\text{gap},\Delta,n}(x)-f_{\text{gap},\Delta}(x)|\le\epsilon$ and also satisfies all the conditions of quantum signal processing Thm.~\ref{Thm:QSP_B}. We provide such a polynomial of degree $\mathcal{O}(\Delta^{-1/2}\log^{3/2}{(\frac{1}{\Delta\epsilon})})$ in Lem.~\ref{Lem.Polynomial_gapped_linear} of Appendix.~\ref{Sec:Polynomials_Low_energy}.
Thus we define $\frac{\hat{H}_{\text{amp}}}{\Delta\alpha} = p_{\text{gap},\Delta,n}[\frac{\hat{H}}{\alpha}]$, which approximates the desired amplified Hamiltonian with error $\|\hat{\Pi}(\frac{\hat{H}_{\text{amp}}}{\Delta\alpha}-\frac{\hat{H}+\alpha\hat{I}(1-\Delta)}{\Delta\alpha})\hat{\Pi}\| \le \max_{x\in[-1,-1+\Delta]}|p_{\text{gap},\Delta,n}(x)-\frac{ x+(1-\Delta)}{\Delta}|\le \epsilon$. \end{proof} As energy gaps in an interval of width $\Delta$ are stretched by a factor $\Delta^{-1}$ using only $\mathcal{O}(\Delta^{-1/2})$ queries, up to logarithmic factors, a quadratic advantage in normalization is achieved. This is essentially spectral gap amplification~\cite{Somma2013SpectralGap} with two important distinctions: first, it applies to any Hamiltonian through the standard-form, though as highlighted in~\cite{Somma2013SpectralGap}, only those encoded with $\alpha=\|\hat{H}\|$, such as frustration-free Hamiltonians, can fully exploit the effect. Second, it amplifies the spectral gap of all eigenvalues uniformly, rather than non-uniformly. By combining with Thm.~\ref{Thm:Ham_Sim_Qubitization}, one obtains a Hamiltonian simulation algorithm for low-energy subspaces, relevant to quantum chemistry and adiabatic computation. \begin{corollary}[Hamiltonian simulation of low-energy subspaces] \label{Cor:Ham_Sim_Spectral_Amplification} Given Hermitian standard-form-$(\hat{H},\alpha,\hat{U},d)$ with eigenstates $\hat{H}/\alpha\ket{\lambda}=\lambda\ket{\lambda}$, let $\Delta \in(0,1)$ be a positive constant, and $\hat{\Pi}=\sum_{\lambda \in[-1,-1+\Delta]}\ket{\lambda}\bra{\lambda}$ be a projector onto the low-energy subspace of $\hat{H}$. Then time-evolution $e^{-i\hat{H}t}$ on eigenstates with eigenvalues $\lambda \in [-1,-1+\Delta]$ can be approximated with error $\epsilon$ using $\mathcal{O}(t\alpha\sqrt{\Delta}\log^{3/2}{(\frac{t\alpha}{\epsilon})}+\Delta^{-1/2}\log^{5/2}{(\frac{t\alpha}{\epsilon})})$ queries to controlled-$\hat{U}$.
\end{corollary} \begin{proof} This follows from multiplying the query complexities of Thm.~\ref{Thm:Ham_Sim_Qubitization} with Thm.~\ref{Thm:Ham_Encoding_Uniform_Amplification}, similar to the proof of Cor.~\ref{Cor:HamExponentials}, to obtain a cost of $\mathcal{O}\left(t\alpha\Delta+\frac{\log{(1/\epsilon_1)}}{\log\log{(1/\epsilon_1)}}\right)\mathcal{O}(\Delta^{-1/2}\log^{3/2}{(\frac{1}{\Delta\epsilon_2})})$ queries for approximating $e^{-i\hat{H}t}$ with error $\epsilon_1 + t\alpha\Delta \epsilon_2$. Thus we choose $\epsilon_1=\epsilon/2$ and $t\alpha\Delta \epsilon_2=\epsilon/2$. \end{proof} It is worth mentioning that Thm.~\ref{Thm:Ham_Encoding_Uniform_Amplification} also performs uniform spectral amplification on \emph{high} energy states. This follows from the polynomial $p_{\text{gap},\Delta,n}(x)$ being odd. Thus its ability to stretch eigenvalues $\lambda\in[-1,-1+\Delta]$ applies to those $\lambda\in[1-\Delta,1]$ as well. \section{Amplitude Amplification Techniques} \label{Sec:AA_by_QSP} Amplitude amplification is a staple quantum subroutine for state preparation that is used in many quantum algorithms. The basic version, based on reflections, is described in Sec.~\ref{Sec:Amplitude_Amplification}. The most common generalization of amplitude amplification replaces the reflections with partial reflections. This allows for constructing more interesting variations in the final state amplitude as a function of the initial state amplitudes, though a systematic approach to designing these variations is not found in prior art. We show in Sec.~\ref{Sec:AA_partial_ref} that these functions are polynomials subject to certain constraints and solve the design problem through Lem.~\ref{Thm:Generalized_Amplitude_Amplification}. We then generalize this in Sec.~\ref{Sec:Flexible_AA} to obtain the flexible amplitude amplification Thm.~\ref{Thm:Controlled_Generalized_Amplitude_Amplification} that relaxes some constraints on these polynomials.
In Sec.~\ref{Sec:Amp_Mult}, an application of flexible amplitude amplification with a particular choice of polynomials yields the amplitude multiplication Thm.~\ref{Thm:Linear_Amplitude_Amplification}. \subsection{Amplitude Amplification} \label{Sec:Amplitude_Amplification} Amplitude amplification is a quantum algorithm for state preparation. Suppose the state creation operator $\hat{G}$ prepares the start state $\ket{s}=\hat{G}\ket{0}\in\mathbb C^d$ from the computational basis. The start state has overlap $\sin{(\theta)}=\langle t\ket{s}$ with the target state $\ket{t}$, and is thus \begin{align} \ket{s}=\sin{(\theta)}\ket{t}+\cos{(\theta)}\ket{t^\perp}, \quad \langle t\ket{t^{\perp}}=0, \end{align} and the goal is to prepare the state $\ket{t}$. The standard solution to this problem boosts the amplitude $\sin{(\theta)}$ of $\ket{t}$ to $\mathcal{O}(1)$. This requires access to two oracles that perform reflections about $\ket{s},\ket{t}$ respectively: \begin{align} \widehat{\text{Ref}}_{\ket{s}}=\hat{I}-2\ket{s}\bra{s}=\hat{G}(\hat{I}-2\ket{0}\bra{0})\hat{G}^\dag=\hat{G}\widehat{\text{Ref}}_{\ket{0}}\hat{G}^\dag,\quad \widehat{\text{Ref}}_{\ket{t}}=\hat{I}-2\ket{t}\bra{t}. \end{align} As $(\hat{I}-2\ket{0}\bra{0})$ is a conditional phase gate, it may be implemented with $\mathcal{O}(\log(d))$ primitive quantum gates. The cost of implementing reflections about an arbitrary target state $\widehat{\text{Ref}}_{\ket{t}}$ is not always as straightforward. However, this cost is typically built into a definition of $\hat{G}$ that marks the target state with a single flag qubit subscripted by $b$. In other words, $\hat{G}\ket{0}_a\ket{0}_b=\sin{(\theta)}\ket{t}_a\ket{0}_b+\cos{(\theta)}\ket{t^\perp}_{ab}$. By defining the new target state as $\ket{t}_a\ket{0}_b$, a reflection about $\ket{t}_a\ket{0}_b$ may be constructed with a single $\hat{I}_a\otimes\hat{\sigma}_z$ gate.
The product $\widehat{\text{Ref}}_{\ket{s}}\widehat{\text{Ref}}_{\ket{t}}$, with query cost $2$, is known as the Grover iterate, and it is easily shown that \begin{align} \ket{s}= \left( \begin{matrix} \cos{(\theta)} \\ \sin{(\theta)} \end{matrix} \right), \quad \widehat{\text{Ref}}_{\ket{s}}\widehat{\text{Ref}}_{\ket{t}} =-\left( \begin{matrix} \cos{(2\theta)} & -\sin{(2\theta)} \\ \sin{(2\theta)} & \cos{(2\theta)} \end{matrix} \right), \end{align} in the $\{\ket{t^\perp},\ket{t}\}$ basis. Thus, keeping track of the global phase $(-1)^N$, which is irrelevant to measurement probabilities, we obtain the well-known result \begin{align} \label{Eq:RegularAmplitudeAmplification} (\widehat{\text{Ref}}_{\ket{s}}\widehat{\text{Ref}}_{\ket{t}})^N \ket{s} = (-1)^N\left[\sin{\left((2N+1)\theta\right)}\ket{t}+\cos{\left((2N+1)\theta\right)}\ket{t^\perp}\right]. \end{align} By choosing $N = \lceil \frac{\pi}{4\theta}-\frac{1}{2}\rceil =\mathcal{O}(1/\theta)$ repetitions, $|\bra{t}(\widehat{\text{Ref}}_{\ket{s}}\widehat{\text{Ref}}_{\ket{t}})^N \hat{G}\ket{0}|=\mathcal{O}(1)$ as desired with $Q=2N+1=\mathcal{O}(1/\theta)$ queries. \subsection{Amplitude Amplification by Partial Reflections} \label{Sec:AA_partial_ref} The more general phase matching technique~\cite{Long1999PhaseMatching} applies partial reflections parameterized by phases $\alpha,\beta$: \begin{align} \label{Eq:Generalized_Reflections} \widehat{\text{Ref}}_{\alpha,\ket{s}}=\hat{I}-(1-e^{-i \alpha})\ket{s}\bra{s}, \quad \widehat{\text{Ref}}_{\beta,\ket{t}}=\hat{I}-(1-e^{-i \beta})\ket{t}\bra{t}, \end{align} and the generalized Grover iterate is then $\widehat{\text{Ref}}_{\alpha,\ket{s}}\widehat{\text{Ref}}_{\beta,\ket{t}}$, which has query cost $2$.
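A compact numerical check of the baseline iterate: the sketch below builds the two reflections for a random instance and verifies $|\bra{t}(\widehat{\text{Ref}}_{\ket{s}}\widehat{\text{Ref}}_{\ket{t}})^N\ket{s}|=|\sin((2N+1)\theta)|$. Amplitudes are compared in magnitude, since with these reflection conventions each iterate carries an overall phase that does not affect success probabilities.

```python
import numpy as np

rng = np.random.default_rng(2)
d, theta = 8, 0.12
t = np.zeros(d); t[0] = 1.0                       # target state |t>
tp = rng.standard_normal(d); tp[0] = 0.0
tp /= np.linalg.norm(tp)                          # a state |t_perp> orthogonal to |t>
s = np.sin(theta) * t + np.cos(theta) * tp        # start state |s> = G|0>

Ref_s = np.eye(d) - 2 * np.outer(s, s)
Ref_t = np.eye(d) - 2 * np.outer(t, t)

N = int(np.ceil(np.pi / (4 * theta) - 0.5))       # O(1/theta) Grover iterations
v = s.copy()
for _ in range(N):
    v = Ref_s @ (Ref_t @ v)

# |<t| (Ref_s Ref_t)^N |s>| = |sin((2N+1) theta)|
assert abs(abs(t @ v) - abs(np.sin((2 * N + 1) * theta))) < 1e-9
print(abs(t @ v))   # boosted from the initial overlap sin(theta) ~ 0.12
```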
An $N=2n+1$ query sequence of these iterates produces the state \begin{align} \label{Eq:Generalized_Sequence} \prod^{n}_{k=1}\widehat{\text{Ref}}_{\alpha_k,\ket{s}}\widehat{\text{Ref}}_{\beta_k,\ket{t}} \ket{s} = (i\mathcal C(\theta)+\mathcal D(\theta))\ket{t}+(\mathcal A(\theta)-i\mathcal B(\theta))\ket{t^\perp}, \end{align} where $\widehat{\text{Ref}}_{\alpha_1,\ket{s}}\widehat{\text{Ref}}_{\beta_1,\ket{t}}$ acts first on the input, and $\mathcal A,\mathcal B,\mathcal C,\mathcal D$ are real functions parameterized by $\vec{\alpha},\vec{\beta}$. Unfortunately, the dependence of $\vec{\alpha},\vec{\beta}$ on any arbitrary choice of $\mathcal A,\mathcal B,\mathcal C,\mathcal D$ appears quite mysterious. Only in very few cases can $\mathcal A,\mathcal B,\mathcal C,\mathcal D$ be specified for arbitrary $N$ and then inverted to obtain a consistent set of $\vec{\alpha},\vec{\beta}$ in closed-form~\cite{Yoder2014}. For instance, standard amplitude amplification corresponds to $\alpha_k=\beta_k=\pi$.
We resolve this mystery by proving the following result. \begin{lemma}[Amplitude amplification with partial reflections] \label{Thm:Generalized_Amplitude_Amplification} Given a state preparation unitary $\hat{G}$ acting on the computational basis states $\ket{0}_a\in \mathbb{C}^d$, $\ket{0}_b\in \mathbb{C}^2$ such that $\hat{G}\ket{0}_a\ket{0}_b=\lambda\ket{t}_a\ket{0}_b+\sqrt{1-\lambda^2}\ket{t^\perp}_{ab}$, where $\ket{t^\perp}_{ab}$ has no support on $\ket{0}_b$, let $C,D$ be any two functions that satisfy all the following conditions: \\ (1) $C,D$ are odd real polynomials in $\lambda$ of degree at most $2N+1$; \\ (2) $\forall \lambda\in[-1,1]$, $ {C}^2(\lambda)+ {D}^2(\lambda)\le 1$; \\ (3) $\forall \lambda\ge 1$, $ {C}^2(\lambda)+ {D}^2(\lambda)\ge 1$. \\ Then there exists a quantum circuit $\hat{V}_{\vec\phi}$ such that $\bra{t}_a\bra{0}_b\hat{V}_{\vec\phi}\ket{0}_a\ket{0}_b=i C(\lambda)+D(\lambda)$, using $N+1$ queries to $\hat{G}$, $N$ queries to $\hat{G}^\dag$, and $\mathcal{O}(N\log{(d)})$ primitive quantum gates pre-computed from $C,D$ in classical $\mathcal{O}(\text{poly}(N))$ time. \end{lemma} This result is quite remarkable as the constraints are lax and allow for many interesting functions. For instance, choosing $ {C}(y) = \pm T_{2N+1}(y) = \pm\sin{((2N+1)\theta)}$, where $y=\sin{(\theta)}$ and $T_{2N+1}$ is a Chebyshev polynomial of the first kind, and ${D}(y)=0$, recovers the baseline amplitude amplification algorithm. The application of Lem.~\ref{Thm:Generalized_Amplitude_Amplification} requires finding a good polynomial approximation, say ${D}$, to the target function. However, it is not always clear how constraint (3), on properties of the polynomial outside the interval of interest, may be satisfied. We rectify this in Thm.~\ref{Thm:Controlled_Generalized_Amplitude_Amplification} by adding an additional ancilla qubit to stage a cancellation of the $C$ term, similar to the proof of Thm.~\ref{Thm:QSP_B}.
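The Chebyshev special case is a one-line numerical check: with $y=\sin(\theta)$ we have $T_{2N+1}(y)=(-1)^N\sin((2N+1)\theta)$, so this choice of $C$ reproduces the baseline amplitude up to sign while remaining bounded:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 3
m = 2 * N + 1
Tm = C.Chebyshev.basis(m)
thetas = np.linspace(-np.pi, np.pi, 721)
# C(sin(theta)) = T_{2N+1}(sin(theta)) = (-1)^N sin((2N+1) theta):
# the Chebyshev choice recovers the baseline amplitude sin((2N+1) theta) up to sign
assert np.allclose(Tm(np.sin(thetas)), (-1) ** N * np.sin(m * thetas))
```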
Subject only to parity and being bounded, we can implement without approximation any arbitrary polynomial of degree exactly equal to the number of queries to the state preparation operator $\hat{G}$. This enables us to compute any real function with a query complexity exactly that of its best polynomial approximation, thus allowing us to transfer powerful results from approximation theory~\cite{Meinardus1967} to quantum computation. \begin{proof}[Proof of Lem.~\ref{Thm:Generalized_Amplitude_Amplification}.] Our starting point is the $Q=2N+1$ query sequence of Eq.~\ref{Eq:Generalized_Sequence}. We define $\hat{G}$ to mark the target state with an ancilla flag qubit $b$, e.g. $\ket{t}\rightarrow \ket{t}_a\ket{0}_b$, $\ket{t^\perp}\rightarrow \ket{t^\perp}_{ab}$, where $\ket{t^\perp}_{ab}$ has no support on $\ket{0}_b$. This allows us to perform partial reflections about $\ket{t}$ using single-qubit phase gates. Let us re-express the generalized reflection in Eq.~\ref{Eq:Generalized_Reflections} as: \begin{align} \widehat{\text{Ref}}_{\alpha,\ket{s}}&=\hat{I}_{ab}-(1-e^{-i \alpha})\hat{G}\ket{0}\bra{0}_{ab}\hat{G}^{\dag} = \hat{G}\left(\hat{I}_{ab}-(1-e^{-i \alpha})\ket{0}\bra{0}_{ab}\right)\hat{G}^{\dag} = \hat{G}\widehat{\text{Ref}}_{\alpha,\ket{0}}\hat{G}^{\dag}. \end{align} Since $\ket{0}_{ab}$ lives in a space of dimension $2d$, $\widehat{\text{Ref}}_{\alpha,\ket{0}}$ is a conditional phase gate and may be implemented with $\mathcal{O}(\log(d))$ primitive gates.
As $\text{span}\{\ket{t}_a\ket{0}_b,\ket{t^\perp}_{ab}\}$ is an invariant subspace of $\widehat{\text{Ref}}_{\alpha,\ket{s}}\widehat{\text{Ref}}_{\beta,\ket{t}}$, we may represent it equivalently with Pauli matrices $\hat{\sigma}_{x,y,z}$ through the replacements \begin{align} \hat{G}&\rightarrow e^{-i \hat{\sigma}_y\theta}=\left( \begin{matrix} \cos{(\theta)} & -\sin{(\theta)} \\ \sin{(\theta)} & \cos{(\theta)} \end{matrix} \right), \quad \hat{G}^{\dag} \rightarrow e^{i \hat{\sigma}_y\theta} =\left( \begin{matrix} \cos{(\theta)} & \sin{(\theta)} \\ -\sin{(\theta)} & \cos{(\theta)} \end{matrix} \right), \\ \nonumber e^{i\alpha/2}\widehat{\text{Ref}}_{\alpha,\ket{0}}&\rightarrow e^{-i \hat{\sigma}_z\alpha/2} = \left( \begin{matrix} e^{i\alpha/2} & 0 \\ 0 & e^{-i\alpha/2} \end{matrix} \right), \quad e^{i\beta/2}\widehat{\text{Ref}}_{\beta,\ket{t}}\rightarrow e^{-i \hat{\sigma}_z\beta/2} = \left( \begin{matrix} e^{i\beta/2} & 0 \\ 0 & e^{-i\beta/2} \end{matrix} \right). \end{align} Thus $\widehat{\text{Ref}}_{\alpha,\ket{s}}\widehat{\text{Ref}}_{\beta,\ket{t}}=e^{-i(\alpha+\beta)/2}e^{i \hat{\sigma}_y\theta}e^{-i \hat{\sigma}_z\alpha/2}e^{-i \hat{\sigma}_y\theta}e^{-i \hat{\sigma}_z\beta/2}$ in this subspace. Though applying $\hat{G}^{\dag}$ in general takes us out of the subspace, this operator is always paired with $\hat{G}$ in the Grover iterate and never occurs in isolation -- the representation is faithful. 
This sequence of alternating $\hat\sigma_{y,z}$ rotations motivates us to define the operator for rotations by angle $\theta$ about an axis in the $\hat\sigma_x$--$\hat\sigma_y$ plane of the Bloch sphere: \begin{align} \label{Eq:BlochSphereXYRotation} e^{-i \hat{\sigma}_\phi\theta}&=e^{-i \hat{\sigma}_z(\pi/2+\phi)/2}e^{-i \hat{\sigma}_y\theta}e^{i \hat{\sigma}_z(\pi/2+\phi)/2}=\left( \begin{matrix} \cos{(\theta)} & -i e^{-i\phi}\sin{(\theta)} \\ -i e^{i\phi}\sin{(\theta)} & \cos{(\theta)} \end{matrix} \right), \end{align} where $\hat{\sigma}_\phi=\cos{(\phi)}\hat\sigma_x+\sin{(\phi)}\hat\sigma_y$. We would like to express Eq.~\ref{Eq:Generalized_Sequence} as a product of just these $Q=2N+1$ rotations $e^{-i \hat{\sigma}_{\phi_k}\theta}$. Thus we replace the input state $\hat{G}\ket{0}_{ab}=\hat{G}e^{i\alpha_0}\widehat{\text{Ref}}_{\alpha_0,\ket{0}}\ket{0}_{ab}$, and obtain \begin{align} \hat{V}_{\vec\alpha,\vec\beta}&=e^{i\alpha_0}\left(\prod^{N}_{k=1}\widehat{\text{Ref}}_{\alpha_k,\ket{s}}\widehat{\text{Ref}}_{\beta_k,\ket{t}} \right)\hat{G}\widehat{\text{Ref}}_{\alpha_0,\ket{0}}. \end{align} Provided that $\hat{V}_{\vec\alpha,\vec\beta}$ always acts on the input state $\ket{0}_{ab}$, the fact $\hat{G}\ket{0}=e^{-i \hat{\sigma}_y\theta}\ket{t^\perp}$ permits the representation: \begin{align} \hat{V}_{\vec\alpha,\vec\beta} &= e^{i\alpha_0/2-i\sum^{N}_{k=1}(\alpha_k+\beta_k)/2}\left(\prod^{N}_{k=1}e^{i \hat{\sigma}_y\theta}e^{-i \hat{\sigma}_z\alpha_k/2}e^{-i \hat{\sigma}_y\theta}e^{-i \hat{\sigma}_z\beta_k/2}\right)e^{-i \hat{\sigma}_y\theta}e^{-i \hat{\sigma}_z\alpha_0/2}.
\end{align} Since we have the identity $e^{i \hat{\sigma}_y\theta}=e^{-i \hat{\sigma}_z \pi/2}e^{-i \hat{\sigma}_y\theta}e^{i \hat{\sigma}_z \pi/2}$, and all $e^{\pm i \hat{\sigma}_y\theta}$ factors in the expression for $\hat{V}_{\vec\alpha,\vec\beta}$ above are sandwiched between $\hat\sigma_z$ rotations, we replace these with the $\hat\sigma_x$--$\hat\sigma_y$ rotations of Eq.~\ref{Eq:BlochSphereXYRotation} and define the composite iterate $\hat{V}_{\vec\phi}$ in Fig.~\ref{Fig:Circuit_AmpAmp_QSP}: \begin{align} \label{Eq:Composite_Iterate} \hat{V}_{\vec\phi}=e^{i\Phi}\hat{V}_{\vec\alpha,\vec\beta}=\left(\prod^{2N+1}_{k=1}e^{-i \hat{\sigma}_{\phi_k}\theta}\right)=\mathcal A(\theta)\hat{I}+i\mathcal B(\theta)\hat\sigma_z+i\mathcal C(\theta)\hat\sigma_x+i\mathcal D(\theta)\hat\sigma_y, \end{align} where $\Phi$, which depends only on $\vec\alpha,\vec\beta$, is chosen to cancel the global phase of $\hat{V}_{\vec\alpha,\vec\beta}$, $\vec{\phi}$ depends linearly on $\vec{\alpha},\vec{\beta}$, and the decomposition into the Pauli basis is always possible for $\text{SU}(2)$ matrices. By replacing the product of two-parameter generalized Grover iterates in Eq.~\ref{Eq:Generalized_Sequence} with a product of more fundamental and simpler one-parameter single-qubit rotations in Eq.~\ref{Eq:Composite_Iterate}, the structure underlying generalized amplitude amplification is made clearer. As these single-qubit rotations are isomorphic to those considered in quantum signal processing Eq.~\ref{Eq:QSP_Single_Qubit}, we may apply Lem.~\ref{Thm:AchievableCD}, which characterizes any achievable $(\mathcal C,\mathcal D)$. Other choices from~\cite{Low2016methodology} such as $(\mathcal A,\mathcal B)$, $(\mathcal A,\mathcal C)$ etc. are also possible.
\end{proof} \begin{lemma}[Achievable $(\mathcal{C},\mathcal{D})$ -- Thm.~2.4 of~\cite{Low2016methodology}] \label{Thm:AchievableCD} For any odd integer $N>0$, a choice of functions $\mathcal C,\mathcal D$ in Eq.~\ref{Eq:QSP_Single_Qubit} is achievable by some $\vec\phi\in\mathbb{R}^{N}$ if and only if all the following are true:\\ (1) $\mathcal{C}(\theta)= C(y),\mathcal{D}(\theta)= D(y)$, where $C,D$ are odd real polynomials in $y=\sin{(\theta)}$ of degree at most $N$; \\ (2) $\forall y\in[-1,1]$, $C^2(y)+D^2(y)\le 1$; \\ (3) $\forall y\ge 1$, $C^2(y)+D^2(y)\ge 1$. \\ Moreover, $\vec\phi\in\mathbb{R}^{N}$ can be computed in classical $\mathcal{O}(\text{poly}(N))$ time. \end{lemma} \begin{figure}\label{Fig:Circuit_AmpAmp_QSP} \end{figure} \subsection{Flexible Amplitude Amplification} \label{Sec:Flexible_AA} By taking a superposition of the state prepared by Lem.~\ref{Thm:Generalized_Amplitude_Amplification}, we may stage a cancellation of the $\mathcal{C}$ function on the target state in Thm.~\ref{Thm:Controlled_Generalized_Amplitude_Amplification}. This allows us to prepare states with amplitudes dictated only by $\mathcal{D}$. \begin{proof}[Proof of Thm.~\ref{Thm:Controlled_Generalized_Amplitude_Amplification}.] Consider the composite iterate in Eq.~\ref{Eq:Composite_Iterate} controlled by a single-qubit ancilla register indexed by subscript $c$: \begin{align} \hat{W}_{\vec\phi}=\hat{V}_{\vec\phi}\otimes \ket{+}\bra{+}_c+\hat{V}_{\vec\pi-\vec\phi}\otimes \ket{-}\bra{-}_c, \end{align} where $\ket{\pm}=\frac{1}{\sqrt{2}}(\ket{0}\pm\ket{1})$. Note that this can be implemented by controlling $\widehat{\text{Ref}}_{\alpha,\ket{0}},\widehat{\text{Ref}}_{\beta,\ket{t}}$ in $\hat{V}_{\vec\phi}$. The number of queries to $\hat{G},\hat{G}^\dag$ is unchanged and $\hat{G},\hat{G}^\dag$ need not be controlled unitaries. Thus $\hat{W}_{\vec\phi}$ still has query complexity $N=2n+1$, equal to that of $\hat{V}_{\vec\phi}$.
From the similarity transformation $\hat{\sigma}_y e^{-i\hat{\sigma}_\phi\theta}\hat{\sigma}_y=e^{-i\hat{\sigma}_{\pi-\phi}\theta}$, \begin{align} \hat{V}_{\vec\pi-\vec\phi}=\mathcal A(\theta)\hat{I}-i\mathcal B(\theta)\hat\sigma_z-i\mathcal C(\theta)\hat\sigma_x+i\mathcal D(\theta)\hat\sigma_y, \end{align} where $\vec\pi$ is the vector where all elements are $\pi$. This allows us to stage a cancellation of $\mathcal C$ when $\hat{W}_{\vec\phi}$ is controlled by the ancilla state $\ket{0}_c$: \begin{align} \label{Eq:Controlled_Generalized_Reflections} \hat{W}_{\vec\phi}\ket{0}_a\ket{0}_b\ket{0}_c&= \mathcal D(\theta)\ket{t}_a\ket{0}_b\ket{0}_c+\mathcal A(\theta)\ket{t^\perp}_{ab}\ket{0}_c+i \mathcal C(\theta)\ket{t}_a\ket{0}_b\ket{1}_c-i \mathcal B(\theta)\ket{t^\perp}_{ab}\ket{1}_c, \end{align} where $\ket{t}_a\ket{0}_b\ket{0}_c$ is our new target state that is uniquely marked by $\ket{0}_b\ket{0}_c$. Thus the amplitude $\mathcal D$ on the target state is completely independent of $\mathcal A,\mathcal B,\mathcal C$ regardless of what they may be. This allows us to directly apply the following result for achievable $\mathcal D$ in Lem.~\ref{Lem:AchievableD}. \end{proof} \begin{lemma}[Achievable $(\mathcal{D})$ -- Thm.~3.4 of~\cite{Low2016methodology}] \label{Lem:AchievableD} For any odd integer $N>0$, a choice of function $\mathcal D$ in Eq.~\ref{Eq:Controlled_Generalized_Reflections} is achievable by some $\vec\phi\in\mathbb{R}^{N}$ if and only if all the following are true:\\ (1) $\mathcal D(\theta)= {D}(y)$, where ${D}$ is an odd real polynomial in $y=\sin{(\theta)}$ of degree at most $N$; \\ (2) $\forall y\in[-1,1]$, $\mathcal{D}^2(y)\le 1$. \\ Moreover, $\vec\phi\in\mathbb{R}^{N}$ can be computed in classical $\mathcal{O}(\text{poly}(N))$ time. \end{lemma} \subsection{Amplitude Multiplication} \label{Sec:Amp_Mult} The proof of amplitude multiplication follows from flexible amplitude amplification by an appropriate choice of polynomials for $D$.
\begin{proof}[Proof of Thm.~\ref{Thm:Linear_Amplitude_Amplification}] The amplitude multiplication algorithm is a special case of Thm.~\ref{Thm:Controlled_Generalized_Amplitude_Amplification} where $D$ is a polynomial that approximates the truncated linear function \begin{align} \label{Eq:Linear_target_functionB} f_{\text{lin},\Gamma}(x)= \begin{cases} \frac{x}{2\Gamma}, & |x| \in [0, \Gamma], \\ \in [-1,1], & |x| \in (\Gamma,1]. \end{cases} \end{align} In Thm.~\ref{Thm.Polynomial_LAA} of Appendix~\ref{Sec:Polynomials_Amplitude_Multiplication}, we approximate $f_{\text{lin},\Gamma}(x)$ with a polynomial with the following properties: $\forall\;\Gamma \in [0,1/2]$ and $\epsilon \le\mathcal{O}(\Gamma)$, the odd polynomial $p_{\text{lin},\Gamma,n}$ of degree $n=\mathcal{O}(\Gamma^{-1}\log{(1/\epsilon)})$ satisfies \begin{align} \forall\; {x\in[- \Gamma,\Gamma]},\; \left|p_{\text{lin},\Gamma,n}(x)- \frac{x}{2\Gamma}\right|\le \frac{\epsilon|x|}{2\Gamma} \quad\text{and}\quad \max_{x\in [-1,1]} |p_{\text{lin},\Gamma,n}(x)|\le 1. \end{align} As this polynomial satisfies the conditions of Thm.~\ref{Thm:Controlled_Generalized_Amplitude_Amplification}, there exists a state preparation unitary $ \hat{W}_{\vec\phi}\ket{0}_a\ket{0}_b\ket{0}_c = p_{\text{lin},\Gamma,n}(y)\ket{t}_a\ket{0}_b\ket{0}_c+\mathcal A(\theta)\ket{t^\perp}_{ab}\ket{0}_c+i \mathcal C(\theta)\ket{t}_a\ket{0}_b\ket{1}_c-i \mathcal B(\theta)\ket{t^\perp}_{ab}\ket{1}_c, $ where the functions $\mathcal{A},\mathcal{B},\mathcal{C}$ are of lesser interest, and the circuit consists of $\mathcal{O}(n)$ queries to $\hat{G},\hat{G}^\dag$ and $\mathcal{O}(n\log{(d)})$ primitive gates. Assuming that $\Gamma\in[|\sin{(\theta)}|,1/2]$ is an upper bound on $|\sin{(\theta)}|$, the amplitude in the target state is $|\bra{t}_a\bra{0}_b\bra{0}_c\hat{W}_{\vec\phi}\ket{0}_a\ket{0}_b\ket{0}_c- \frac{\sin{(\theta)}}{2\Gamma}|\le\frac{\epsilon|\sin{(\theta)}|}{2\Gamma}$.
In other words, all initial target state amplitudes $\sin{(\theta)}$ are divided by a constant factor $2\Gamma$ with a multiplicative error $\epsilon$ that can be made exponentially small. \end{proof} Note that if one is interested in multiplication by a factor less than one, trivial solutions exist. For any $\Gamma \ge 1/2$, one could prepare an ancilla state $\ket{\Gamma}_c= \frac{1}{2\Gamma}\ket{0}_c+\sqrt{1-\frac{1}{4\Gamma^2}}\ket{1}_c$ and simply define the target state to be $\ket{t}_a\ket{0}_b\ket{0}_c$ in the prepared state $\hat{G}\ket{0}_a\ket{0}_b\ket{\Gamma}_c=\frac{\sin{(\theta)}}{2\Gamma}\ket{t}_a\ket{0}_b\ket{0}_c+\cdots$. \section{Uniform Spectral Amplification by Amplitude Multiplication} \label{Sec:Ham_Sim_Overlaps} We now consider a certain kind of structure within the signal unitary $\hat{U}$ that encodes some Hamiltonian in standard-form. Whereas Sec.~\ref{Sec:Uniform_Hamiltonian_Amplification} treats $\hat{U}$ as a single oracle, we now assume that it factors into other unitaries, say $\hat{U}=\hat{U}^\dag_\text{row}\hat{U}_\text{col}$, or $\hat{U}=\hat{U}^\dag_\text{row}\hat{U}_\text{mix}\hat{U}_\text{col}$, that we assume access to as oracles. This factorization leads in Sec.~\ref{SubSec:State_Overlaps} to the interpretation that encoded Hamiltonians have matrix elements defined by the overlap between some set of quantum states. We investigate in Sec.~\ref{SubSec:Amplified_Overlap} how this structure may be exploited for uniform spectral amplification. By applying amplitude multiplication, this is possible through Lem.~\ref{Thm:Ham_Encoding_Uniform_Amplification_State_Overlaps} in a fairly general setting. In Sec.~\ref{SubSec:Reduction_Sparse_Matrices}, we specialize this to sparse Hamiltonian simulation, which leads to the improved simulation algorithm Thm.~\ref{Cor:Ham_Sim_Sparse_Amplified}.
In Sec.~\ref{Sec:Lower_Bound}, this algorithm is proven to be optimal in all parameters, at least up to logarithmic factors, through a matching lower bound Thm.~\ref{Thm:Lower_Bound}. \subsection{Matrix Elements as State Overlaps} \label{SubSec:State_Overlaps} Decomposing the signal unitary into factors motivates a different interpretation of the standard-form \begin{align} \label{Eq:Level_Two_Encoding} \frac{\hat{H}}{\alpha}=(\bra{0}_a\otimes\hat{I}_s)\hat{U}(\ket{0}_a\otimes \hat{I}_s)=(\bra{0}_a\otimes\hat{I}_s)\hat{U}^\dag_\text{row}\hat{U}_\text{col}(\ket{0}_a\otimes \hat{I}_s). \end{align} By definition, any unitary operator implements a basis transformation $\hat{U}=\sum_{k}\ket{B_k}\bra{A_k}_{as}$ between complete orthonormal sets of basis states $\{\ket{B_k}_{as}\}$ and $\{\ket{A_k}_{as}\}$, and similarly for $\hat{U}_\text{row},\hat{U}_\text{col}$. Now consider a set of basis states $\{\ket{j}_a\}$ on the ancilla register, and a set of basis states $\{\ket{u_j}_s\}$ on the system register. Without loss of generality, we may represent $\hat{U}_\text{row}=\sum_{k}\ket{\chi_{0,k}}_{as}\bra{0}_a\bra{u_k}_s+\sum_{j\neq 0}\sum_{k}\ket{\chi_{j,k}}_{as}\bra{j}_a\bra{u_k}_s$ and $\hat{U}_\text{col}=\sum_{k}\ket{\psi_{0,k}}_{as}\bra{0}_a\bra{u_k}_s+\sum_{j\neq 0}\sum_{k}\ket{\psi_{j,k}}_{as}\bra{j}_a\bra{u_k}_s$ for some set of basis states $\{\ket{\chi_{j,k}}_{as}\}$, $\{\ket{\psi_{j,k}}_{as}\}$. Let us substitute this into Eq.~\ref{Eq:Level_Two_Encoding} and drop the $0$ subscript. \begin{align} \label{Eq:State_Overlap_Model} \frac{\hat{H}_{jk}}{\alpha}=\bra{u_j}\frac{\hat{H}}{\alpha}\ket{u_k}&=\left(\bra{0}_a\bra{u_j}_s\hat{U}^\dag_\text{row}\right)\left(\hat{U}_\text{col}\ket{0}_a\ket{u_k}_s\right)=\langle\chi_{0,j}|\psi_{0,k}\rangle_{as}=\langle\chi_{j}|\psi_{k}\rangle_{as}.
\end{align} In other words, elements of $\hat{H}$ in the $\ket{u_j}_s$ basis may always be interpreted as the overlap of appropriately defined quantum states $\ket{\psi_{k}}_{as},\ket{\chi_{k}}_{as}$, which we call overlap states. Moreover, $\hat{H}$ need not be unitary when the dimension of these states is greater than that of $\hat{H}$. More generally, we may factor the signal unitary into three unitaries $\hat{U}=\hat{U}^\dag_{\text{row}}\hat{U}_{\text{mix}}\hat{U}_{\text{col}}$. If we preserve the interpretation of $\hat{U}_{\text{row}}$ and $\hat{U}_{\text{col}}$ as preparing appropriately defined quantum states, the third unitary $\hat{U}_{\text{mix}}$ is a new component that mixes these states to encode the following Hamiltonian in standard-form \begin{align} \label{Eq:State_Overlap_Model_imperfect_3} \frac{\hat{H}}{\alpha}=(\bra{0}_a\otimes\hat{I}_s)\hat{U}^\dag_{\text{row}}\hat{U}_{\text{mix}}\hat{U}_{\text{col}}(\ket{0}_a\otimes \hat{I}_s), \quad \frac{\hat{H}_{jk}}{\alpha} &= \bra{\chi_j }_{as}\hat{U}_{\text{mix}} \ket{\psi_k}_{as}. \end{align} Note that this reduces to Eq.~\ref{Eq:State_Overlap_Model} by choosing $\hat{U}_{\text{mix}}$ to be the identity, or by absorbing it into the definition of either $\hat{U}_{\text{row}}$ or $\hat{U}_{\text{col}}$. Combined with Thm.~\ref{Thm:Ham_Sim_Qubitization}, time evolution by $e^{-i\hat{H}t}$ may be approximated with error $\epsilon$ using $\mathcal{O}\big(t\alpha+\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\big)$ queries to $\hat{U}_\text{row},\hat{U}_\text{mix}$, and $\hat{U}_\text{col}$. However, the ability to efficiently prepare arbitrary quantum states represents an extremely powerful model of computation. For instance, arbitrary temperature Gibbs state preparation is QMA-complete~\cite{Gharibian2015quantum}.
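As an illustrative numerical check of this state-overlap interpretation (a sketch with arbitrarily chosen toy dimensions and a Haar-random factorization, not part of the formal development), the ancilla-$\ket{0}$ block of $\hat{U}^\dag_\text{row}\hat{U}_\text{col}$ coincides with the matrix of overlaps of the prepared states:

```python
import numpy as np

# Sketch: the (j,k) element of the ancilla-0 block of U_row^dag U_col equals
# the overlap <chi_j|psi_k> of the states prepared from |0>_a |u_j>_s.
rng = np.random.default_rng(0)
da, ds = 4, 3                                  # toy ancilla and system dimensions

def haar_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

U_row, U_col = haar_unitary(da * ds), haar_unitary(da * ds)

# Standard-form block: project onto |0>_a on both sides
M = (U_row.conj().T @ U_col).reshape(da, ds, da, ds)
H_over_alpha = M[0, :, 0, :]

# Overlap states |chi_j> = U_row |0>_a |u_j>_s and |psi_k> = U_col |0>_a |u_k>_s
basis = np.kron(np.eye(da)[:, :1], np.eye(ds))     # columns are |0>_a |u_j>_s
chi, psi = U_row @ basis, U_col @ basis
assert np.allclose(H_over_alpha, chi.conj().T @ psi)
```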
That not all states may be prepared in $\mathcal{O}(1)$ queries to commonly used quantum oracles can be built into the definition of the overlap states by splitting them into `good' components $\ket{\tilde\psi_j}_{a_1s},\ket{\tilde\chi_j}_{a_1s}$ marked by an ancilla state $\ket{0}_{a_2}$, and `bad' components that are discarded. Difficult states then have a small amplitude in the $\ket{0}_{a_2}$ subspace. Thus \begin{align} \label{Eq:State_Overlap_Model_imperfect} \ket{\psi_j}_{as}&=\sqrt{\lambda_{\beta}\beta_j} \ket{\tilde \psi_j}_{a_1 s}\ket{0}_{a_2}+\sqrt{1-\lambda_{\beta}\beta_j}\ket{\psi_{\text{bad},j}}_{a_1s}\ket{1}_{a_2}, \\ \nonumber \ket{\chi_j}_{as}&=\sqrt{\lambda_{\gamma}\gamma_j} \ket{\tilde \chi_j}_{a_1s}\ket{0}_{a_2}+\sqrt{1-\lambda_{\gamma}\gamma_j}\ket{\chi_{\text{bad},j}}_{a_1s}\ket{2}_{a_2}. \end{align} Note that the dimension of the ancilla register $a_1a_2$ is equal to that of $a$. The coefficients $\lambda_{\gamma},\lambda_{\beta} \in (0,1]$ represent a slowdown factor due to the difficulty of state preparation, and the coefficients $\beta_j,\gamma_j\in[0,1]$, normalized to $\max_{j}\beta_j = 1,\max_{j}\gamma_j = 1$, represent how the amplitude in good states can be index-dependent by design. By restricting $\hat{U}_\text{mix}$ to be identity on the register $a_2$, this encodes the following Hamiltonian in standard-form \begin{align} \label{Eq:State_Overlap_Model_imperfect_3} \frac{\hat{H}}{\alpha}=(\bra{0}_a\otimes\hat{I}_s)\hat{U}^\dag_{\text{row}}\hat{U}_{\text{mix}}\hat{U}_{\text{col}}(\ket{0}_a\otimes \hat{I}_s), \; \frac{\hat{H}_{jk}}{\alpha}= \langle \chi_j |_{as} \hat{U}_{\text{mix}}|\psi_k \rangle_{as} = \sqrt{\lambda_{\gamma}\lambda_{\beta}\gamma_j\beta_k}\langle \tilde\chi_j |_{a_1s}\hat{U}_\text{mix}| \tilde\psi_k \rangle_{a_1s}. \end{align} By explicitly including the slowdown factor $\sqrt{\lambda_{\gamma}\lambda_{\beta}}$, the spectral norm $\|\hat{H}\| \le \alpha \sqrt{\lambda_{\gamma}\lambda_{\beta}}$ is also reduced.
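Explicitly, because $\hat{U}_\text{mix}$ acts as the identity on register $a_2$ and the ancilla states $\ket{0}_{a_2},\ket{1}_{a_2},\ket{2}_{a_2}$ are mutually orthogonal, only the `good' components survive in the matrix element:
\begin{align*}
\langle \chi_j |_{as} \hat{U}_{\text{mix}}|\psi_k \rangle_{as}=\sqrt{\lambda_{\gamma}\gamma_j}\sqrt{\lambda_{\beta}\beta_k}\,\langle \tilde\chi_j |_{a_1s}\hat{U}_\text{mix}| \tilde\psi_k \rangle_{a_1s}\langle 0 | 0 \rangle_{a_2}=\sqrt{\lambda_{\gamma}\lambda_{\beta}\gamma_j\beta_k}\,\langle \tilde\chi_j |_{a_1s}\hat{U}_\text{mix}| \tilde\psi_k \rangle_{a_1s}.
\end{align*}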
\subsection{Amplitude Multiplication of Overlap States} \label{SubSec:Amplified_Overlap} This \emph{state overlap} encoding of Hamiltonians motivates the use of amplitude amplification. As the amplitudes of all states $\ket{\tilde \psi_j}$ are attenuated by a constant factor $\sqrt{\lambda_\beta}$, the intuition is that one requires $\mathcal{O}(1/\sqrt{\lambda_\beta})$ queries to the state preparation operator $\hat{U}_{\text{col}}$ to boost the amplitude in the subspace marked by $\ket{0}_{a_2}$ by a factor $\mathcal{O}(1/\sqrt{\lambda_\beta})$, and similarly for $\ket{\tilde\chi_j}$. Thus $\mathcal{O}(1/\sqrt{\lambda_\beta}+1/\sqrt{\lambda_\gamma})$ queries appear sufficient to reduce the normalization $\alpha$ by a factor $\sqrt{\lambda_{\gamma}\lambda_{\beta}}$. This suggests that the query complexity of Hamiltonian simulation could be improved to $\mathcal{O}\big(t\alpha(\sqrt{\lambda_{\gamma}}+\sqrt{\lambda_{\beta}})+\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\big)$, which is most advantageous when $\lambda_\beta$ and $ \lambda_\gamma$ are both small. However, realizing this speedup is non-trivial. In the context of prior art in sparse Hamiltonian simulation, attempts have been made to exploit amplitude amplification~\cite{Berry2012}. There, it was discovered that the sinusoidal non-linearity of amplitude amplification introduces large errors. As these errors accumulate over long simulation times $t$, controlling them led to a query complexity scaling like $\mathcal{O}(t^{3/2}/\epsilon)$, which is polynomially worse than what intuition suggests. In the following, we avoid these issues by introducing a linearized version of amplitude amplification, which we call the \emph{amplitude multiplication} algorithm. Before proceeding, note that amplitude amplification also imposes additional restrictions on the form of the overlap states in Eq.~\ref{Eq:State_Overlap_Model_imperfect}.
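The sinusoidal non-linearity noted above can be seen numerically. The following sketch (with an arbitrarily chosen slowdown factor, using the standard Grover amplitude formula $\sin((2m+1)\sin^{-1}(\cdot))$) shows that the gain applied by $m$ Grover iterates depends on the initial amplitude:

```python
import numpy as np

# After m Grover iterates the amplitude sqrt(lam*b) maps to
# sin((2m+1)*asin(sqrt(lam*b))), so the gain depends on the coefficient b.
lam = 0.01                                             # slowdown factor lambda_beta
m = int(np.pi / (4 * np.arcsin(np.sqrt(lam))) - 0.5)   # ~O(lam**-0.5) iterates

def gain(b):
    a0 = np.sqrt(lam * b)                              # initial amplitude
    return np.sin((2 * m + 1) * np.arcsin(a0)) / a0

gains = [gain(b) for b in (0.1, 0.5, 1.0)]
# The gains are far from a single constant (roughly 14.4, 12.3, 10.0 here),
# which distorts the encoded matrix elements in an index-dependent way.
assert max(gains) / min(gains) > 1.2
```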
Amplitude amplification requires the ability to perform reflections $\widehat{\text{Ref}}_{\ket{0}_{a_2}}$ about the subspace marked by $\ket{0}_{a_2}$, as well as reflections $\widehat{\text{Ref}}_{\psi}$ on any arbitrary superposition of initial states $\ket{\psi_j}$, that is $\forall j,\;\widehat{\text{Ref}}_{\psi}\hat{U}_\text{col}\ket{0}_a\ket{u_j}_s= -\hat{U}_\text{col}\ket{0}_a\ket{u_j}_s$, and $\widehat{\text{Ref}}_{\psi}$ acts as the identity for any other ancilla state. The case for $\hat{U}_\text{row}$ and $\ket{\chi_j}$ is identical. Whereas the first operation \begin{align} \widehat{\text{Ref}}_{\ket{0}_{a_2}}=(\hat{I}_{a_2}-2\ket{0}\bra{0}_{a_2})\otimes \hat{I}_{a_1s}, \end{align} is easy using $\mathcal{O}(1)$ primitive gates, the second operation requires $\hat{U}_\text{col}$ to represent controlled state preparation. In other words, with the input $\ket{u_j}_s$ on the system register, the overlap state has the decomposition \begin{align} \label{Eq:State_Overlap_Model_imperfect_3_decomposed} \hat{U}_\text{col}\ket{0}_a\ket{u_j}_s=\ket{\psi_j}=\left(\sqrt{\lambda_{\beta}\beta_j} \ket{\bar \psi_j}_{a_1}\ket{0}_{a_2}+\sqrt{1-\lambda_{\beta}\beta_j}\ket{\bar \psi_{\text{bad},j}}_{a_1} \ket{1}_{a_2}\right)\ket{u_j}_s, \end{align} thus encoding the following Hamiltonian in standard-form \begin{align} \label{Eq:Level_Three_Encoding} \frac{\hat{H}}{\alpha}=(\bra{0}_a\otimes\hat{I}_s)\hat{U}^\dag_{\text{row}}\hat{U}_{\text{mix}}\hat{U}_{\text{col}}(\ket{0}_a\otimes \hat{I}_s), \quad \frac{\hat{H}_{jk}}{\alpha}= \sqrt{\lambda_\gamma\lambda_\beta\gamma_j\beta_k} \bra{u_j}_s\bra{\bar \chi_j}_{a_1}\hat{U}_\text{mix}\ket{\bar \psi_k}_{a_1}\ket{u_k}_s, \end{align} and allowing us to construct the controlled-reflection operator \begin{align} \widehat{\text{Ref}}_\psi=\sum_j(\hat{I}_a-2\ket{0}_{a_2}\ket{\bar\psi_j}_{a_1}\bra{\bar\psi_j}_{a_1}\bra{0}_{a_2})\otimes \ket{u_j}\bra{u_j}_s=\hat{U}_{\text{col}}((\hat{I}_{a}-2\ket{0}\bra{0}_{a})\otimes
\hat{I}_{s})\hat{U}^\dag_{\text{col}}, \end{align} using $2$ queries and $\mathcal{O}(\log d)$ primitive gates. The error introduced by a naive application of amplitude amplification is illustrated by an explicit calculation. Using a sequence of $m\ge 0$ controlled-Grover iterates $\widehat{\text{Ref}}_\psi\widehat{\text{Ref}}_{\ket{0}_{a_2}}$ making $\mathcal{O}(m)$ queries, one can prepare the state \begin{align} \ket{\psi_{\text{amp},j}} &= \left(\widehat{\text{Ref}}_\psi\widehat{\text{Ref}}_{\ket{0}_{a_2}}\right)^m\hat{U}_{\text{col}}\ket{0}_a\ket{u_j}_s \\ &= \left(\sin{\left((2m+1)\sin^{-1}{\left(\sqrt{\lambda_\beta\beta_j}\right)}\right)} \ket{\bar \psi_j}_{a_1}\ket{0}_{a_2}+\cdots \ket{1}_{a_2}\right)\ket{u_j}_s = \left(\sqrt{\beta'_j} \ket{\bar \psi_j}_{a_1}\ket{0}_{a_2}+\cdots \ket{1}_{a_2}\right)\ket{u_j}_s. \nonumber \end{align} With the choice $m=\lfloor \frac{\pi}{4\sin^{-1}{\left(\sqrt{\lambda_\beta}\right)}}-\frac{1}{2}\rfloor=\mathcal{O}(\lambda^{-1/2}_\beta)$, we are guaranteed that all $\sqrt{\beta_j'}\ge \sqrt{\lambda_\beta\beta_j}$. Though this improves the normalization, it also specifies an erroneous Hamiltonian as the matrix elements $\langle \chi_{\text{amp},j}|\hat{U}_\text{mix}|\psi_{\text{amp},k}\rangle$ are larger than $\hat{H}_{jk}$ by an index-dependent factor. In contrast, amplitude multiplication in Thm.~\ref{Thm:Linear_Amplitude_Amplification} avoids this non-linearity and allows us to boost the normalization of the encoded Hamiltonian with only an exponentially small distortion to its spectrum. This leads to \begin{lemma}[Uniform spectral amplification by multiplied state overlaps] \label{Thm:Ham_Encoding_Uniform_Amplification_State_Overlaps} Let the Hamiltonian $\hat{H}$ be encoded in the standard-form of Eq.~\ref{Eq:Level_Three_Encoding} with normalization $\alpha$.
Given upper bounds $\Lambda_\beta \in [\lambda_\beta,1/2],\; \Lambda_\gamma \in [\lambda_\gamma,1/2]$ on the slowdown factors, and a target error $\epsilon\in(0,\text{min}\{\Lambda_\beta,\Lambda_\gamma\})$, the Hamiltonian $\hat{H}_{\text{lin}}$ can be encoded in standard-form with normalization $4\alpha\sqrt{\Lambda_\beta\Lambda_\gamma}$ such that $\|\hat{H}_{\text{lin}}-\hat{H}\|\le \frac{5}{4}\epsilon\|\hat{H}\|< \frac{5}{4}\alpha \epsilon\sqrt{\Lambda_\beta\Lambda_\gamma}$ using $Q=\mathcal{O}((\Lambda_\beta^{-1/2}+\Lambda_\gamma^{-1/2})\log{(1/\epsilon)})$ queries, $\mathcal{O}(Q \log{(d)})$ primitive gates, and $1$ additional ancilla qubit. \end{lemma} \begin{proof} Let us apply Thm.~\ref{Thm:Linear_Amplitude_Amplification}, which requires one additional ancilla qubit, to the state overlap model Eq.~\ref{Eq:State_Overlap_Model_imperfect_3_decomposed}. We identify $\hat{U}_{\text{col}}$ as the state preparation operator that prepares the target state marked by $\ket{0}_{a_2}$ with overlap $\sqrt{\lambda_\beta\beta_j}$. Assume that $\sqrt{\lambda_\beta}\le 1/2$, and let $\Lambda_\beta \in [\lambda_\beta,1/2]$ be an upper bound on the slowdown factor. 
Then there exists a quantum circuit $\hat{U}'_{\text{col}}$ that makes $Q_\beta=\mathcal{O}(\Lambda_\beta^{-1/2}\log{(1/\epsilon)})$ queries to $\hat{U}_{\text{col}}$ and uses $\mathcal{O}(Q_\beta\log{(d)})$ primitive gates, and similarly for $\hat{U}_{\text{row}}$, to prepare the states \begin{align} \label{Eq:State_Overlap_Model_amplified} \ket{\psi_{\text{lin},j}} &= \hat{U}'_{\text{col}}\ket{0}_a\ket{u_j}_s = \left(\sqrt{\frac{\lambda_\beta\beta_j}{4\Lambda_\beta}}(1+\epsilon_{\beta,j}) \ket{\bar \psi_j}_{a_1}\ket{0}_{a_2}+\cdots\ket{1}_{a_2}\right)\ket{u_j}_s, \\ \nonumber \ket{\chi_{\text{lin},j}} &= \hat{U}'_{\text{row}}\ket{0}_a\ket{u_j}_s = \left(\sqrt{\frac{\lambda_\gamma\gamma_j}{4\Lambda_\gamma}}(1+\epsilon_{\gamma,j}) \ket{\bar \chi_j}_{a_1}\ket{0}_{a_2}+\cdots\ket{2}_{a_2}\right)\ket{u_j}_s, \end{align} where $|\epsilon_{\beta,j}|,|\epsilon_{\gamma,j}|< \epsilon\in(0,\text{min}\{\Lambda_\beta,\Lambda_\gamma\})\le 1/2$ are state-dependent errors in the amplitude. Let us define the Hamiltonian $\hat{H}_{\text{lin}}$ encoded in standard-form with normalization $4\alpha\sqrt{\Lambda_\beta\Lambda_\gamma}$ as follows \begin{align} \frac{\hat{H}_{\text{lin}}}{4\alpha\sqrt{\Lambda_\beta\Lambda_\gamma}}=(\bra{0}_a\otimes \hat{I}_s)\hat{U}'_{\text{row}}\hat{U}_{\text{mix}}\hat{U}'_{\text{col}}(\ket{0}_a\otimes \hat{I}_s)=\sum_{jk}\frac{\hat{H}_{jk}}{4\alpha}\frac{(1+\epsilon_{\gamma,j})(1+\epsilon_{\beta,k})}{\sqrt{\Lambda_\beta\Lambda_\gamma}}\ket{u_j}\bra{u_k}_s. \end{align} We may now evaluate the error of $\hat{H}_{\text{lin}}$ from that of the original Hamiltonian $\hat{H}$, following a similar approach from~\cite{Berry2012}. Let $\hat{\epsilon}_\beta$ be a diagonal matrix with elements $\epsilon_{\beta,j}$, and similarly $\hat{\epsilon}_\gamma$ with elements $\epsilon_{\gamma,j}$.
Then \begin{align} \label{Eq:Overlap_Hamiltonian_Error} \hat{H}_{\text{lin}} &=\left(\hat{H}+\hat{\epsilon}_\gamma \hat{H}+\hat{H}\hat{\epsilon}_\beta + \hat{\epsilon}_\gamma\hat{H}\hat{\epsilon}_\beta \right), \\\nonumber \|\hat{H}_{\text{lin}}-\hat{H}\| &\le \|\hat{H}\|\left(\|\hat{\epsilon}_\beta\|+\|\hat{\epsilon}_\gamma\|+\|\hat{\epsilon}_\beta\|\|\hat{\epsilon}_\gamma\|\right)\le \|\hat{H}\|(2\epsilon+\epsilon^2) < \frac{5}{4}\|\hat{H}\|\epsilon < \frac{5}{4}\alpha \sqrt{\Lambda_\beta\Lambda_\gamma}\epsilon. \end{align} where the second-last inequality is due to $\epsilon < 1/2$, and the last inequality applies the upper bound $\|\hat{H}\|\le \alpha \sqrt{\Lambda_\beta\Lambda_\gamma}$. Summing up $Q=Q_\beta+Q_\gamma+1$ leads to the claimed query and gate complexities. \end{proof} Combining with Thm.~\ref{Thm:Ham_Sim_Qubitization} then furnishes the following result on Hamiltonian simulation. \begin{lemma}[Hamiltonian simulation by multiplied state overlaps] \label{Thm:HamSim_Amplified_Overlaps} Let the Hamiltonian $\hat{H}$ be encoded in the standard-form of Eq.~\ref{Eq:Level_Three_Encoding} with normalization $\alpha$.
Given upper bounds $\Lambda_\beta \in [\lambda_\beta,1/2],\; \Lambda_\gamma \in [\lambda_\gamma,1/2]$ on the slowdown factors, $\Lambda\ge \|\hat{H}\|$, and a target error $\epsilon\in(0,\text{min}\{\Lambda_\beta,\Lambda_\gamma\})$, time-evolution $e^{-i\hat{H}t}$ can be approximated with error $\epsilon$ using $Q=\mathcal{O}\Big(t\alpha(\sqrt{\Lambda_\beta}+\sqrt{\Lambda_\gamma})\log{(\frac{t\Lambda}{\epsilon})}+(\Lambda_\beta^{-1/2}+\Lambda_\gamma^{-1/2})\frac{\log{(1/\epsilon)}\log{(t\Lambda/\epsilon)}}{\log\log{(1/\epsilon)}}\Big)$ queries, $\mathcal{O}(Q\log{(d)})$ primitive gates, and $\mathcal{O}(1)$ additional ancilla qubits. \end{lemma} \begin{proof} From Lem.~\ref{Thm:Ham_Encoding_Uniform_Amplification_State_Overlaps}, we may encode $\hat{H}_\text{lin}$ in standard-form with normalization $4\alpha\sqrt{\Lambda_\beta\Lambda_\gamma}$ and error $\|\hat{H}_{\text{lin}}-\hat{H}\| =\mathcal{O}(\|\hat{H}\|\epsilon_0) =\mathcal{O}(\Lambda \epsilon_0)$. This requires $Q_0=\mathcal{O}((\Lambda_\beta^{-1/2}+\Lambda_\gamma^{-1/2})\log{(1/\epsilon_0)})$ queries to $\hat{U}_\text{row},\hat{U}_\text{mix},\hat{U}_\text{col}$ and their inverses, $\mathcal{O}(Q_0 \log{(d)})$ primitive gates, and $1$ additional ancilla qubit. Using the fact $\|e^{i \hat A}-e^{i \hat B}\|\le \|\hat A-\hat B\|$, the error of $e^{-i\hat{H}_\text{lin}t}$ from ideal time-evolution is $\|e^{-i\hat{H}_\text{lin}t}-e^{-i\hat{H}t}\|\le \|\hat{H}_\text{lin}t-\hat{H}t\|=\mathcal{O}(t\Lambda\epsilon_0)$. By combining with Thm.~\ref{Thm:Ham_Sim_Qubitization}, time-evolution by $e^{-i\hat{H}_\text{lin}t}$ can be approximated with error $\epsilon_1$ using $Q_1=\mathcal{O}\big(t\alpha\sqrt{\Lambda_\beta\Lambda_\gamma} + \frac{\log{(1/\epsilon_1)}}{\log\log{(1/\epsilon_1)}}\big)$ queries to controlled-$\hat{U}'_\text{row}\hat{U}_\text{mix}\hat{U}'_\text{col}$ and its inverse, $\mathcal{O}(Q_1\log{(d)})$ additional primitive gates, and $\mathcal{O}(1)$ additional ancilla qubits.
Thus time-evolution by $e^{-i\hat{H}t}$ can be approximated with error $\epsilon=\mathcal{O}(\epsilon_1+t\Lambda\epsilon_0)$ using $Q=Q_0Q_1$ queries to controlled-$\hat{U}_\text{row}, \hat{U}_\text{mix}, \hat{U}_\text{col}$ and their inverses, and $\mathcal{O}(Q_1\log{(d)}+Q_0Q_1\log{(d)})=\mathcal{O}(Q\log{(d)})$ primitive gates. We can control the error by choosing $\epsilon_1=\mathcal{O}(\epsilon)$ and $\epsilon_0 =\mathcal{O}(\epsilon/(t\Lambda))$. Substituting into $Q$ produces the claimed query complexity. \end{proof} In the asymptotic limit of large $t\gg \log{(1/\epsilon)}$, the query complexity may be simplified to $\mathcal{O}\Big(t\alpha(\sqrt{\Lambda_\beta}+\sqrt{\Lambda_\gamma})\log{(\frac{t\Lambda}{\epsilon})}\Big)$ queries. \subsection{Reduction to Sparse Matrices} \label{SubSec:Reduction_Sparse_Matrices} The results of Sec.~\ref{Sec:Ham_Sim_Overlaps}, presented in a general setting, apply to the special case of sparse matrices. The reduction follows by making three additional assumptions. First, assume that the dimension of $\ket{0}_a\in \mathbb{C}^{3n}$ is larger than that of $\ket{u_j}_s\in\mathbb{C}^n$. Second, assume that $\forall j\in[n]$, $\ket{u_j}_s$ is the computational basis state $\ket{j}_s$.
Third, we assume that there exist oracles in Def.~\ref{Def:Sparse_Oracle} that describe $d$-sparse matrices~\cite{Berry2012}. With these oracles and an upper bound $\Lambda_{\text{max}}\ge \|\hat{H}\|_{\text{max}}$, it is well-known that $\mathcal{O}(1)$ queries suffice to implement the isometry represented by $\hat{U}_{\text{row}}\ket{0}_a$ and $\hat{U}_{\text{col}}\ket{0}_a$ with output states \begin{align} \label{Eq:Sparse_Hmax_states} \hat{U}_{\text{col}}\ket{0}_a\ket{j}_s &= \ket{\psi_j}_{as} = \frac{1}{\sqrt{d}}\sum_{p\in F_{j}}\ket{j}_s\ket{p}_{a_1}\left(\sqrt{\frac{\hat{H}_{jp}}{\Lambda_{\text{max}}}}\ket{0}_{a_2}+\sqrt{1-\frac{|\hat{H}_{jp}|}{\Lambda_{\text{max}}}}\ket{1}_{a_2}\right), \\\nonumber \bra{0}_a\bra{k}_s\hat{U}^\dag_{\text{row}} &= \bra{\chi_k}_{as} = \frac{1}{\sqrt{d}}\sum_{q\in F_{k}}\bra{k}_{s}\bra{q}_{a_1}\left(\sqrt{\frac{\delta_{kq}\hat{H}_{kq}+(1-\delta_{kq})\hat{H}^*_{kq}}{\Lambda_{\text{max}}}}\bra{0}_{a_2}+\sqrt{1-\frac{|\hat{H}_{kq}|}{\Lambda_{\text{max}}}}\bra{2}_{a_2}\right), \\\nonumber \langle \chi_j |\hat{U}_{\text{mix}} |\psi_k \rangle&=\frac{\hat{H}_{jk}}{\alpha}=\frac{\hat{H}_{jk}}{d\Lambda_{\text{max}}}, \end{align} where $\delta_{jk}$ is the Kronecker delta function, and $F_j= \{k: k = f(j,l)\; , l\in[d]\}$ is the set of non-zero column indices in row $j$. Note that our definition of the isometry Eq.~\ref{Eq:Sparse_Hmax_states} is an improvement over~\cite{Berry2012} as it avoids ambiguity in both the principal range of the square-roots when $\hat{H}_{jk}<0$ and a sign problem when $\hat{H}_{jj}<0$. We also choose $\hat{U}_{\text{mix}}$ to swap the registers $s$ and $a_1$. From~\cite{Berry2012}, the gate complexity of $\hat{U}_{\text{col}}$, $\hat{U}_{\text{row}}$, and $\hat{U}_{\text{mix}}$ combined is $\mathcal{O}(\log{(n)}+\text{poly}(m))$, where $m=\mathcal{O}(\log{(t\|\hat{H}\|/\epsilon)})$ is the number of bits of precision of $\hat{H}_{jk}$.
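The isometry above can be checked numerically. The following sketch (toy dimensions, restricted to a real symmetric $\hat{H}$ so that complex square-root branch subtleties do not arise; it builds the bra and ket coefficient tensors directly rather than the full unitaries) verifies that the overlap reproduces $\hat{H}_{jk}/(d\Lambda_{\text{max}})$, including for negative entries:

```python
import numpy as np

# Registers: s (dim n), a1 (dim n), a2 (dim 3); index order (s, a1, a2).
rng = np.random.default_rng(1)
n = 4
d = n                                   # take d = n, i.e. dense rows, F_j = all columns
H = rng.uniform(-1, 1, size=(n, n))
H = (H + H.T) / 2                       # real symmetric test Hamiltonian
Lmax = np.max(np.abs(H))                # upper bound on the max-norm

def ket_psi(j):
    v = np.zeros((n, n, 3), dtype=complex)
    for p in range(n):
        v[j, p, 0] = np.sqrt(H[j, p] / Lmax + 0j) / np.sqrt(d)
        v[j, p, 1] = np.sqrt(1 - abs(H[j, p]) / Lmax) / np.sqrt(d)
    return v

def bra_chi(k):
    w = np.zeros((n, n, 3), dtype=complex)
    for q in range(n):
        # for real symmetric H, the conjugation in the delta term is immaterial
        w[k, q, 0] = np.sqrt(H[k, q] / Lmax + 0j) / np.sqrt(d)
        w[k, q, 2] = np.sqrt(1 - abs(H[k, q]) / Lmax) / np.sqrt(d)
    return w

def U_mix(v):
    return np.swapaxes(v, 0, 1)         # swap registers s and a1

# Bra coefficients are used literally, so the overlap is a plain sum of products
M = np.array([[np.sum(bra_chi(j) * U_mix(ket_psi(k))) for k in range(n)]
              for j in range(n)])
assert np.allclose(M, H / (d * Lmax))
```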
The contribution from $\text{poly}(m)=\mathcal{O}(m^{5/2})$ is due to integer arithmetic for computing square-roots and trigonometric functions. This combined with Thm.~\ref{Thm:Ham_Sim_Qubitization} recovers the previous best result on sparse Hamiltonian simulation using $Q=\mathcal{O}\big(td\Lambda_{\text{max}}+\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\big)$ queries~\cite{Low2016HamSim}, and $\mathcal{O}(Q(\log{(n)}+\text{poly}(m)))$ primitive gates. To see how Lem.~\ref{Thm:HamSim_Amplified_Overlaps} improves on this, we rewrite Eq.~\ref{Eq:Sparse_Hmax_states} in the format of Eq.~\ref{Eq:State_Overlap_Model_imperfect} by collecting coefficients of the subspace marked by $\ket{0}_{a_2}$. \begin{align} \label{Eq:State_Overlap_Model_imperfect_collected} \ket{\psi_j}_{as}&=\sqrt{\frac{\sigma_j}{d\Lambda_{\text{max}}}}\left(\sum_{p\in F_{j}}\sqrt{\frac{\hat{H}_{jp}}{\sigma_j}}\ket{j}_s\ket{p}_{a_1}\right)\ket{0}_{a_2} + \cdots\ket{j}_s\ket{1}_{a_2}, \\ \nonumber \bra{\chi_k}_{as}&=\sqrt{\frac{\sigma_k}{d\Lambda_{\text{max}}}}\left(\sum_{q\in F_{k}}\sqrt{\frac{\delta_{kq}\hat{H}_{kq}+(1-\delta_{kq})\hat{H}^*_{kq}}{\sigma_k}}\bra{k}_{s}\bra{q}_{a_1}\right)\bra{0}_{a_2} + \cdots\bra{k}_s\bra{2}_{a_2}, \end{align} where $\sigma_j=\sum_k|\hat{H}_{jk}|$, and the induced one-norm $\|\hat{H}\|_1=\max_{j}\sigma_j$. Note that $\ket{\bar{\psi}_j}=\sum_{p\in F_{j}}\sqrt{\frac{\hat{H}_{jp}}{\sigma_j}}\ket{j}_s\ket{p}_{a_1}$, and similarly for $\ket{\bar{\chi}_j}$. From this, we obtain our main result on sparse Hamiltonian simulation Thm.~\ref{Cor:Ham_Sim_Sparse_Amplified}. \begin{proof}[Proof of Thm.~\ref{Cor:Ham_Sim_Sparse_Amplified}] Comparison of Eq.~\ref{Eq:State_Overlap_Model_imperfect_collected} with Eq.~\ref{Eq:State_Overlap_Model_imperfect} yields $\beta_j =\gamma_j = \frac{\sigma_j}{\|\hat{H}\|_1}$, $\lambda_\beta = \lambda_\gamma = \frac{\|\hat{H}\|_1}{d\Lambda_{\text{max}}}$.
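As a consistency check, these identifications reproduce the amplitudes appearing in Eq.~\ref{Eq:State_Overlap_Model_imperfect_collected}:
\begin{align*}
\lambda_\beta\beta_j=\frac{\|\hat{H}\|_1}{d\Lambda_{\text{max}}}\cdot\frac{\sigma_j}{\|\hat{H}\|_1}=\frac{\sigma_j}{d\Lambda_{\text{max}}}.
\end{align*}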
Thus we have the upper bound $\Lambda_\beta = \Lambda_\gamma = \frac{\Lambda_1}{d\Lambda_\text{max}}\ge \lambda_\beta=\lambda_\gamma$. Moreover, from Eq.~\ref{Eq:Sparse_Hmax_states}, the normalization constant $\alpha = d \Lambda_{\text{max}}$. The claimed query complexity is obtained by substitution into Lem.~\ref{Thm:HamSim_Amplified_Overlaps}. \end{proof} This result is quite remarkable as it strictly improves upon prior art, modulo logarithmic factors, by exploiting additional structural information. In the asymptotic limit of large $\Lambda_1 t \gg \log{(1/\epsilon)}$, the query complexity may be simplified to $\mathcal{O}\Big(t\sqrt{d\Lambda_{\text{max}}\Lambda_1}\log{(\frac{t\Lambda}{\epsilon})}\Big)$. Using the inequality $\|\hat{H}\|\le \|\hat{H}\|_1\le d\|\hat{H}\|_{\text{max}}$, the worst-case occurs when these norms are all equal, thus $\Lambda=\Lambda_1=d\Lambda_{\text{max}}$. There, the query complexity of Thm.~\ref{Cor:Ham_Sim_Sparse_Amplified} up to logarithmic factors is $\mathcal{O}(td\Lambda_{\text{max}})$, equal to that of prior art~\cite{Low2016HamSim}. However, the best-case $\|\hat{H}\|_1=\mathcal{O}(\|\hat{H}\|_{\text{max}})$ leads to a quadratic improvement in sparsity with query complexity of $\mathcal{O}(t\sqrt{d}\Lambda_{\text{max}})$, also ignoring logarithmic factors. Another approach implicit in~\cite{Berry2012} assumes that $\sigma_j$ are provided by the quantum oracle $\hat{O}_{C}\ket{j}_s\ket{z}_c=\ket{j}_s\ket{z\oplus \sigma_j}_c$ when queried with the row index $j\in[n]$. This allows us to exactly compensate for the sinusoidal non-linearity of amplitude amplification by modifying initial state amplitudes by some $j$-dependent multiplicative factor.
Thus $\hat{H}$ may be encoded in standard-form with normalization $\mathcal{O}(\sqrt{d\Lambda_{\text{max}}\Lambda_1})$ exactly without any error, leading to a Hamiltonian simulation algorithm with query complexity $Q=\mathcal{O}\big(t(d\|\hat{H}\|_{\text{max}}\|\hat{H}\|_1)^{1/2}+\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\big)$. While this improves on Thm.~\ref{Cor:Ham_Sim_Sparse_Amplified} by logarithmic factors, and matches the complexity of Claim.~\ref{Claim:Sparse_Ham_Sim}, $\hat{O}_{C}$ is in general difficult to construct. \subsection{Lower Bound on Sparse Hamiltonian Simulation} \label{Sec:Lower_Bound} In this section, we prove the lower bound Thm.~\ref{Thm:Lower_Bound} on sparse Hamiltonian simulation, given information on the sparsity, max-norm, and induced one-norm. The lower bounds in prior art are obtained by constructing Hamiltonians that compute well-known functions. When applied to our situation, one obtains $\Omega(t\|\hat{H}\|_1)$ queries through the $\text{PARITY}$ problem~\cite{Berry2015Hamiltonian}, and $\Omega(\sqrt{d})$ queries through $\text{OR}$~\cite{Berry2012}. This leads to an additive lower bound $\Omega(t\|\hat{H}\|_1+\sqrt{d})$. Using similar techniques, we obtain a stronger lower bound $\Omega(t(d\|\hat{H}\|_1)^{1/2})$ by creating a Hamiltonian that computes the solution to the composed function $\text{PARITY}\circ \text{OR}$. Specifically, we combine a Hamiltonian that solves $\text{PARITY}$ on $n$ bits with constant error using at least $\Omega(s\|\hat{H}\|_{\text{max}}t)$ queries, where $t=\Theta(\frac{n}{s\|\hat{H}\|_{\text{max}}})$, with a Hamiltonian that solves $\text{OR}$ on $m$ bits exactly, with the promise that at most $1$ bit is non-zero, using at least $\Omega(\sqrt{m})$ queries. Note that in all cases, the query complexity with respect to error is at least an additive term $\Omega(\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}})$~\cite{Berry2015Hamiltonian}.
The Hamiltonian $\hat{H}_{\text{PARITY}}$ that solves $\text{PARITY}$ on $n$ bits is well-known~\cite{Berry2015Hamiltonian}, and is based on the Hamiltonian $\hat{H}_{\text{spin}}$ for perfect state transfer in spin chains. For completeness, we outline the procedure. Consider a Hamiltonian $\hat{H}_{\text{spin}}$ of dimension $n+1$, with matrix elements in the computational basis $\{\ket{j}_s:j\in[n+1]\}$ defined as \begin{align} \bra{j-1}_s\hat{H}_{\text{spin}}\ket{j}_s=\sqrt{j(n-j+1)}/n. \end{align} Note that this Hamiltonian has sparsity $1$, max-norm $\Theta(1)$, and $1$-norm $\Theta(1)$. Time evolution by this Hamiltonian, $e^{-i\hat{H}_{\text{spin}}n\pi/2}\ket{0}_s=\ket{n}_s$, exactly transfers the state $\ket{0}$ to $\ket{n}$ in time $t=\frac{\pi n}{2}$. One way to speed up these dynamics is to uniformly increase the value of all matrix elements. However, any increase in $\|\hat{H}\|_{\text{max}}$ is trivial as it simply decreases $t$ by a proportionate amount. Another way is to boost the sparsity of $\hat{H}_{\text{spin}}$ by taking a tensor product with a Hamiltonian $\hat{H}_{\text{complete}}$ of dimension $s$ where all matrix elements are $1$ in the computational basis $\{\ket{j}_c:j\in[s]\}$: \begin{align} \bra{i}_c\hat{H}_{\text{complete}}\ket{j}_c=1, \quad \forall i\in[s],\; j\in[s]. \end{align} One of the eigenstates of $\hat{H}_{\text{complete}}$ is the uniform superposition $\ket{u}_c=\frac{1}{\sqrt{s}}\sum_{j\in[s]}\ket{j}_c$ with eigenvalue $s$, that is $\hat{H}_{\text{complete}}\ket{u}_c=s\ket{u}_c$. Thus we define the Hamiltonian \begin{align} \hat{H}_{sc}=\hat{H}_{\text{spin}}\otimes \hat{H}_{\text{complete}}. \end{align} Note that $\hat{H}_{sc}$ has sparsity $s$, max-norm $\Theta(1)$, and $1$-norm $\Theta(s)$. One can see that $\hat{H}_{sc}$ performs faster state transfer, $e^{-i\hat{H}_{sc}n\pi/(2s)}\ket{0}_s \ket{u}_c=\ket{n}_s \ket{u}_c$, in time $t=\frac{\pi n}{2 s}$. We find it useful to define the state $\ket{j}_{sc}= \ket{j}_s \ket{u}_c$.
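As a quick numerical sanity check (illustrative only, not part of the paper's argument; assumes Python with numpy), one can verify perfect state transfer by $\hat{H}_{\text{spin}}$ directly: evolving $\ket{0}$ for time $t=\pi n/2$ places all amplitude on $\ket{n}$, up to a global phase.

```python
import numpy as np

n = 6
H = np.zeros((n + 1, n + 1))
for j in range(1, n + 1):                  # <j-1| H_spin |j> = sqrt(j(n-j+1))/n
    H[j - 1, j] = H[j, j - 1] = np.sqrt(j * (n - j + 1)) / n

w, V = np.linalg.eigh(H)                   # exact diagonalization
psi0 = np.zeros(n + 1)
psi0[0] = 1.0
psi_t = V @ (np.exp(-1j * w * n * np.pi / 2) * (V.T @ psi0))   # e^{-iHt}|0>, t = n*pi/2

assert abs(abs(psi_t[-1]) - 1.0) < 1e-9    # all amplitude on |n>, up to a phase
```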
Adding another qubit to this composite Hamiltonian, together with some slight modification, solves ${\text{PARITY}}$. Given an $n$-bit string $x=x_0x_1...x_{n-1}$, let us consider the Hamiltonian of dimension $2$ that computes the $\text{NOT}$ function on the computational basis $\{\ket{j}_{\text{output}}:j\in[2]\}$, \begin{align} \hat{H}_{\text{NOT},j}=\left( \begin{matrix} x_j \oplus 1 & x_j \\ x_j & x_j \oplus 1 \end{matrix} \right). \end{align} One can see that $\hat{H}_{\text{NOT},j}\ket{b}_\text{output}=\ket{b\oplus x_j}_\text{output}$, which flips the output qubit exactly when $x_j=1$, as expected of a $\text{NOT}$ function controlled by the bit $x_j$. In the basis $\ket{j}_{sc}$, we define the Hamiltonian \begin{align} \label{Eq:Ham_Parity} \hat{H}_{\text{PARITY}}=\left(\sum_{j\in[n]}\frac{\sqrt{(j+1)(n-j)}}{n}\ket{j+1}\bra{j}_{sc}\otimes \hat{H}_{\text{NOT},j}\right) + \text{Hermitian conjugate}. \end{align} This Hamiltonian also performs perfect state transfer, but since each transition between the states $\ket{0}_\text{output}$ and $\ket{1}_\text{output}$ is gated by a $\text{NOT}$ function on the bit $x_j$, the output state of time-evolution is $e^{-i\hat{H}_{\text{PARITY}}n\pi/(2s)}\ket{0}_s\ket{u}_c\ket{0}_\text{output}=\ket{n}_s\ket{u}_c\ket{\bigoplus_j x_j}_\text{output}$. In the computational basis, $\hat{H}_{\text{PARITY}}$ has sparsity $2s$, max-norm $\Theta(1)$, and $1$-norm $\Theta(s)$. Even though $\hat{H}_{\text{NOT},j}$ has only one non-zero element per column, the sparsity increases by a factor of $2$ as we cannot compute beforehand the column index of the non-zero element. Thus measuring the $\text{output}$ register returns the parity of $x$, \begin{align} \text{PARITY}(x)=\bigoplus_{j=0}^{n-1} x_j, \end{align} after evolving for time $t=\frac{\pi n}{2 s}$.
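The same numerical check extends to the gated chain (an illustrative sketch, not the paper's construction verbatim; here $s=1$, so the complete-graph register is trivial and the transfer time is $t=n\pi/2$): the final output qubit carries $\text{PARITY}(x)$.

```python
import numpy as np

n = 5
x = [1, 0, 1, 1, 0]                        # input bits; PARITY(x) = 1
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# H = sum_j a_j |j+1><j| (x) H_NOT,j + h.c., with H_NOT,j = X if x_j = 1 else I
H = np.zeros((2 * (n + 1), 2 * (n + 1)))
for j in range(n):
    a = np.sqrt((j + 1) * (n - j)) / n
    E = np.zeros((n + 1, n + 1))
    E[j + 1, j] = 1.0
    H += a * np.kron(E + E.T, X if x[j] else I2)

w, V = np.linalg.eigh(H)
psi0 = np.zeros(2 * (n + 1))
psi0[0] = 1.0                              # |0>_s |0>_output
psi_t = V @ (np.exp(-1j * w * n * np.pi / 2) * (V.T @ psi0))

parity = sum(x) % 2
amp = psi_t[2 * n + parity]                # amplitude on |n>_s |PARITY(x)>_output
assert abs(abs(amp) - 1.0) < 1e-9
```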
It is well-known that the parity of $n$ bits cannot be computed with fewer than $\Omega(n)$ quantum queries, thus the query complexity of simulating time-evolution by $\hat{H}_{\text{PARITY}}$ for time $t$ is at least $\Omega(ts)$. As sparsity and $1$-norm exhibit the same scaling here, and in general $\|\hat{H}\|_{1}\le d\|\hat{H}\|_{\text{max}}$, the more accurate statement if given information on $\|\hat{H}\|_1$ is the lower bound of $\Omega(t\|\hat{H}\|_1)$ queries. In contrast, the lower bound of~\cite{Berry2015Hamiltonian} quotes $\Omega(\text{sparsity} \times t)$ as they consider the case where one is given information only on the sparsity. We now present the extension to creating a Hamiltonian that solves $\text{PARITY}\circ \text{OR}$. Notably, this Hamiltonian allows one to vary the sparsity and $1$-norm independently. \begin{proof}[Proof of Thm.~\ref{Thm:Lower_Bound}] The first step is to construct a Hamiltonian that solves the $\text{OR}$ function on $m$ bits $x_{0}x_{1}...x_{m-1}$, promised that at most $1$ bit is non-zero. This Hamiltonian of dimension $2m$, in the computational basis $\{\ket{k}_{\text{output}}\ket{j}_{o}:k\in[2],j\in[m]\}$, is \begin{align} \hat{H}_{\text{OR}}=\left( \begin{array}{c|c} \hat{C}_{ 1} & \hat{C}_{0} \\ \hline \hat{C}^\dag_{0} & \hat{C}_{1} \end{array} \right). \end{align} Note that our construction is based on a modification of~\cite{Berry2014}, where the block corresponding to $\hat{C}_{1}$ is the zero matrix. Here, $\hat{C}_{1}$ mimics the diagonal component $x_j\oplus 1$ of $\hat{H}_{\text{NOT}}$ in that it preserves the output register if $\text{OR}(x)=0$, and $\hat{C}_{0}$ mimics the off-diagonal component $x_j$ of $\hat{H}_{\text{NOT}}$ in that it performs a bit-flip on the output register if $\text{OR}(x)=1$.
These matrices are defined as follows: \begin{align} \hat{C}_{0}=\left( \begin{array}{cccc} x_0 & x_1 & \cdots &x_{m-1} \\ x_{m-1} & x_{0} & \cdots & x_{m-2}\\ x_{m-2} & x_{m-1} & \cdots & x_{m-3}\\ \vdots & \vdots & \ddots & \vdots \\ x_{1} & x_{2} & \cdots & x_{0} \end{array} \right), \quad \hat{C}_{1} = \frac{1}{m}\left( \begin{array}{cccc} 1 & 1 & \cdots &1 \\ 1 & 1 & \cdots &1 \\ \vdots & \vdots & \ddots &\vdots \\ 1 & 1 & \cdots &1 \end{array} \right) -\frac{\hat{C}_{0}+\hat{C}_{0}^\dag}{2}. \end{align} Note that the non-Hermitian matrix $\hat{C}_{0}$ has rows formed from cyclic shifts of $x$, whereas $\hat{C}_{1}$ is Hermitian. Let us define the uniform superposition $\ket{u}_o=\frac{1}{\sqrt{m}}\sum_{j\in[m]}\ket{j}_o$. It is easy to verify that if at most one bit in $x$ is non-zero, $\hat{C}_{0}\ket{u}_o=\text{OR}(x)\ket{u}_o$. Similarly, $\hat{C}_{1}\ket{u}_o=(\text{OR}(x)\oplus 1)\ket{u}_o$. Thus $\hat{H}_{\text{OR}}\ket{j}_\text{output}\ket{u}_o=\ket{j\oplus \text{OR}(x)}_\text{output}\ket{u}_o$. Note that $\hat{H}_{\text{OR}}$ has sparsity $2m$, max-norm $\Theta(1)$, and $1$-norm $\Theta(1)$. Given an $nm$-bit string $x_{0,0}x_{0,1}...x_{0,m-1}x_{1,0}...x_{n-1,m-1}$, the Hamiltonian $\hat{H}_{\text{PARITY}\circ \text{OR}}$ that computes the $\text{PARITY}$ of $n$ $m$-bit $\text{OR}$ functions is similar to $\hat{H}_{\text{PARITY}}$ in Eq.~\ref{Eq:Ham_Parity}, except that instead of composing with $\text{NOT}$ Hamiltonians defined by the bit $x_j$ for each $j\in[n]$, we compose with $\text{OR}$ Hamiltonians defined by the bits $x_{j,0}x_{j,1}...x_{j,m-1}$ for each $j\in[n]$. Defining $\hat{H}_{\text{OR},j}$ as the $\text{OR}$ Hamiltonian on those bits, \begin{align} \label{Eq:Ham_Parity_OR} \hat{H}_{\text{PARITY}\circ \text{OR}}=\left(\sum_{j\in[n]}\frac{\sqrt{(j+1)(n-j)}}{n}\ket{j+1}\bra{j}_{sc}\otimes \hat{H}_{\text{OR},j}\right) + \text{Hermitian conjugate}.
\end{align} On the input state $\ket{0}_s\ket{u}_c\ket{u}_o\ket{0}_\text{output}$, the output of time-evolution is $e^{-i\hat{H}_{\text{PARITY}\circ\text{OR}}\,n\pi/(2s)}\ket{0}_s\ket{u}_c\ket{u}_o\ket{0}_\text{output}=\ket{n}_s\ket{u}_c\ket{u}_o\ket{\bigoplus_j \text{OR}(x_{j,0}x_{j,1}...x_{j,m-1})}_\text{output}$. Thus measuring the $\text{output}$ register returns \begin{align} \text{PARITY}\circ \text{OR}(x)=\bigoplus^{n-1}_{j=0} \text{OR}(x_{j,0}x_{j,1}...x_{j,m-1}), \end{align} after time-evolution by $t=n\pi/(2s)$. Note that $\hat{H}_{\text{PARITY}\circ \text{OR}}$ has sparsity $d=2sm$, max-norm $\Theta(1)$, and $1$-norm $\Theta(s)$. It is well-known that the constant-error quantum query complexity of $\text{PARITY}\circ \text{OR}$~\cite{Reichardt2011reflections} is the product of the query complexity of $\text{PARITY}$ with that of $\text{OR}$. As at least $\Omega(\sqrt{m})$ queries are required to compute the $\text{OR}$ of $m$ bits, $\text{PARITY}\circ \text{OR}(x)$ requires at least $\Omega(n\sqrt{m})$ queries. Thus any algorithm for simulating time-evolution by $\hat{H}_{\text{PARITY}\circ \text{OR}}$ requires at least $\Omega(n\sqrt{m})=\Omega(t\sqrt{ds})$ queries. \end{proof} \section{Universality of the Standard-Form} \label{Sec:Equivalence_Sim_Mea} We now establish an equivalence between simulation and measurement that justifies our focus on directly manipulating the standard-form encoding of structured Hamiltonians. This equivalence, proven using Thm.~\ref{Thm:Standard_Form_From_Ham_Sim}, allows us to interconvert quantum circuits that implement time-evolution $e^{-i\hat{H}}$ for $\|\hat{H}\|=\mathcal{O}(1)$ and quantum circuits that implement a measurement of $\hat{H}$, with only a logarithmic query overhead in error and a constant overhead in space. An application of this result to Hamiltonian simulation is Cor.~\ref{Cor:HamExponentials} for Hamiltonians that are a sum of Hermitian terms, given access only to their exponentials.
An intuitive picture of when simulation is possible emerges by interpreting the standard-form matrix encoding Def.~\ref{Def:Standard_Form} as a quantum circuit that implements a measurement. To see this explicitly, consider a Hermitian matrix encoded in standard-form-$(\hat{H},\alpha,\hat{U},d)$. Thus for any arbitrary input state $\ket{\psi}_s\in \mathcal{H}_s$, the standard-form applies \begin{align} \hat{U}\ket{G}_a\ket{\psi}_s=\frac{1}{\alpha}\ket{G}_a\hat{H}\ket{\psi}_s+\ket{\Phi}_{as}, \quad |\bra{\Phi}_{as}(\ket{G}_a\otimes\hat{I}_s) |= 0. \end{align} Note that in this section, we find it helpful to leave $\ket{G}$ explicit, similar to Sec.~\ref{Sec:Standard-form_QSP}. Upon measurement outcome $\ket{G}$ on the ancilla, which occurs with best-case probability $\max_{\ket{\psi}\in\mathcal{H}_s}|\frac{\hat{H}}{\alpha}\ket{\psi}|^2=(\|\hat{H}\|/\alpha)^2$, the measurement operator $\hat{H}/\alpha$ is implemented on the system. As all measurement outcomes orthogonal to $\ket{G}$ do not concern us, we represent their output with some orthogonal unnormalized quantum state $\ket{\Phi}_{as}$. Combined with the Hamiltonian simulation by qubitization results of Thm.~\ref{Thm:Ham_Sim_Qubitization}, one concludes that whenever one has access to a quantum circuit that implements a generalized measurement with measurement operator $\hat{H}/\alpha$ corresponding to one of the measurement outcomes, time-evolution using $\mathcal{O}\left(t\alpha +\frac{\log{(1/\epsilon)}}{\log\log{(1/\epsilon)}}\right)$ queries is possible. The converse of approximating measurements given $e^{-i\hat{H}t}$ is a standard application of quantum phase estimation. The proof sketch is as follows: (1) assume $t$ is chosen such that $\|\hat{H}t\| \le c \le 1$ for some absolute constant $c$, and define $\hat{H}'=\hat{H}t$.
(2) Perform quantum phase estimation using $\mathcal{O}(1/\epsilon)$ queries to controlled-$e^{-i\hat{H}t}$ to encode the eigenphases $\lambda$ of its eigenstates $\hat{H}'\ket{\lambda}=\lambda\ket{\lambda}$ to precision $\epsilon$ in binary format $\tilde \lambda$ in an $m$-qubit ancilla register $\mathcal{H}_b$, where $m=\mathcal{O}(\log{(1/\epsilon)})$. (3) Perform a controlled rotation on the single-qubit ancilla $\ket{0}_a$ to reduce the amplitude of $\ket{\lambda}$ by a factor $\tilde\lambda$. (4) Uncompute the binary register by running quantum phase estimation in reverse. This implements the sequence \begin{align} \label{Eq:Standard_form_QPE} \ket{0}_b\ket{0}_a\ket{\lambda}_s &\rightarrow \ket{\tilde\lambda}_b\ket{0}_a\ket{\lambda}_s \rightarrow \ket{\tilde\lambda}_b\left(\tilde\lambda\ket{0}_a+\sqrt{1-|\tilde\lambda |^2}\ket{1}_a\right)\ket{\lambda}_s \\\nonumber &\rightarrow \ket{0}_b\left(\tilde\lambda\ket{0}_a+\sqrt{1-|\tilde\lambda |^2}\ket{1}_a\right)\ket{\lambda}_s. \end{align} Thus projecting onto the state $\ket{0}_b\ket{0}_a$ implements the measurement operator $\hat{H}'$ with error $\max_\lambda|\lambda - \tilde\lambda| = \mathcal{O}(\epsilon)$, and best-case success probability $\|\hat{H}'\|^2$. As Eq.~\ref{Eq:Standard_form_QPE} is a standard-form encoding of $\hat{H}'$ with the signal unitary defined by steps (2-4), this establishes one direction in the equivalence between measurement and simulation, up to polynomial error and logarithmic space overhead. Ignoring these factors, our study of Hamiltonian simulation reduces to that of generalized measurements except in one edge case: this equivalence does not hold with respect to $t$ when $e^{-i\hat{H}t}$ can be simulated with $o(t)$ queries. However, this case is less interesting as no-fast-forwarding theorems~\cite{Childs2010Limitation} show that $\Omega(t)$ queries are necessary for Hamiltonians that solve generic problems.
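The controlled-rotation step above can be emulated classically by working directly in the eigenbasis, with exact diagonalization standing in for phase estimation (a hedged numpy sketch, illustrative only): projecting the ancilla onto $\ket{0}$ leaves $\hat{H}'\ket{\psi}$ on the system.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
A = rng.standard_normal((dim, dim))
Hp = (A + A.T) / 2
Hp /= 2 * np.linalg.norm(Hp, 2)            # enforce ||H'|| <= 1/2

w, V = np.linalg.eigh(Hp)                  # exact eigenphases, standing in for QPE
psi = rng.standard_normal(dim)
psi /= np.linalg.norm(psi)

c = V.T @ psi                              # amplitudes in the eigenbasis
branch0 = V @ (w * c)                      # ancilla |0> branch
branch1 = V @ (np.sqrt(1.0 - w**2) * c)    # orthogonal ancilla |1> branch

assert np.allclose(branch0, Hp @ psi)      # |0> branch carries H'|psi>
assert np.isclose(np.linalg.norm(branch0)**2 + np.linalg.norm(branch1)**2, 1.0)
```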
We strengthen this equivalence in the opposite direction with Thm.~\ref{Thm:Standard_Form_From_Ham_Sim}, approximating measurement operators $\hat{H}'$ using $\log{(1/\epsilon)}$ queries to $e^{-i\hat{H}'}$ and $\mathcal{O}(1)$ ancilla qubits. The idea is to use quantum signal processing techniques to implement two operator transformations: $\hat{H}_1=\frac{i}{2}(e^{-i\hat{H}'}-e^{i\hat{H}'})$, followed by $\hat{H}_2 = \sin^{-1}{(\hat{H}_1)}$. Thus $\sin^{-1}\left(\frac{i}{2}(e^{-i\hat{H}'}-e^{i\hat{H}'})\right)=\hat{H}'$. All that remains is finding a polynomial approximation to $\sin^{-1}(x)$ with uniform error $\epsilon$ and degree $n=\mathcal{O}(\log(1/\epsilon))$. At first glance this seems impossible: $\sin^{-1}(x)$ is not analytic at $x=\pm 1$, thus its uniform polynomial approximation over $[-1,1]$ requires degree $n=\text{poly}(1/\epsilon)$. Fortunately, this can be overcome due to the restricted domain $\|\hat{H}t\| \le c$. \begin{lemma}[Polynomial approximation to $\sin^{-1}(x)$] \label{Lem.Polynomial_arcsin} $\forall\;\epsilon \in (0, \mathcal{O}(1)]$, there exists an odd polynomial $p_{\text{arcsin},n}$ of degree $n=\mathcal{O}(\log{(1/\epsilon)})$ such that \begin{align} \max_{ x \in [-1/2,1/2]} \left|p_{\text{arcsin},n}(x)-\sin^{-1}{(x)}\right|\le \epsilon,\quad \text{and}\quad \max_{ x \in [-1,1]} \left|p_{\text{arcsin},n}(x)\right|\le 1. \end{align} \end{lemma} \begin{proof} We restate Thm.~3 of~\cite{Saff1989polynomial} by Saff and Totik: Let $\beta$ be any number satisfying $\beta > 1$ and let $f\in C^k[-1,1]$ be a piecewise analytic function on $m>0$ closed intervals $[-1,1]=\bigcup^{m}_{j=0}[x_j,x_{j+1}]$, $-1=x_0<x_1<\cdots < x_{m-1}<x_m=1$, where the restriction of $f$ to any of the closed intervals $[x_j,x_{j+1}]$ is analytic, and $f$ is not analytic at each point $x_1,\cdots ,x_{m-1}$.
Then there exist constants $g,G>0$ that depend only on $f$, and degree $n>0$ polynomials $p_n$, such that for every $x\in[-1,1]$, $|p_n(x)-f(x)|\le \frac{G}{n^{k+1}}e^{-g n d^{\beta}(x)}$, where $d(x)=\min_{0<j<m}|x-x_j|$. Let us now apply this theorem. Define the function \begin{align} f_{\text{arcsin}}(x)= \begin{cases} \sin^{-1}(x), & x \in[ -3/4,3/4], \\ \text{sgn}(x)\sin^{-1}(3/4) & \text{otherwise}, \end{cases} \end{align} where $\text{sgn}(x)$ is the sign of $x$. $f_{\text{arcsin}}(x)$ is continuous but not differentiable at $x=\pm 3/4$. Thus $f\in C^0[-1,1]$, $\min_{x\in[-1/2,1/2]} d(x)\ge 1/4$, and there exist absolute constants $G',g'>0$ and polynomials $p_n$ such that $\max_{x\in[-1/2,1/2]}|p_n(x)-f_{\text{arcsin}}(x)|\le \frac{G'}{n}e^{-g' n / 4^\beta}=\epsilon$. Hence $n = \mathcal{O}(\log{(1/\epsilon)})$. Since $e^{-g' n d^{\beta}(x)} \le 1$ and $|\sin^{-1}(3/4)|< 0.85$, there exists a constant $n_0> 0$ such that for all $n>n_0$, $\max_{x\in[-1,1]}|f_{\text{arcsin}}(x)-p_n(x)|\le 0.15$, thus $|p_n(x)|\le 1$. If $p_n(x)$ is not odd, replace it with its antisymmetric component $p_n(x)\leftarrow \frac{p_n(x)-p_n(-x)}{2}$, which is odd with at worst the same error. Now let $p_{\text{arcsin},n} = p_n$. \end{proof} We now apply this polynomial approximation of $\sin^{-1}(x)$ to the proof of Thm.~\ref{Thm:Standard_Form_From_Ham_Sim}. \begin{proof}[Proof of Thm.~\ref{Thm:Standard_Form_From_Ham_Sim}] The transformation from time evolution $e^{-i\hat{H}t}$ to measurement $\hat{H}t$ takes three steps. First, encode the Hermitian operator $\hat{H}_1=\sin{(\hat{H}t)}$ in standard-form.
This can be done with one query to the controlled time-evolution operator $\hat{U}_0=\ket{0}\bra{0}\otimes \hat{I} + \ket{1}\bra{1}\otimes e^{-i\hat{H}t}$ and its inverse $\hat{U}_0^\dag$: \begin{align} \hat{U}_1&= \hat{U}^\dag_0(\hat{\sigma}_x\otimes \hat{I})\hat{U}_0 = \ket{1}\bra{0}\otimes e^{i\hat{H}t} + \ket{0}\bra{1}\otimes e^{-i\hat{H}t}, \quad \ket{G}=e^{i\hat{\sigma}_x\pi/4}\ket{0}, \\\nonumber \hat{H}_1&=(\bra{G}\otimes \hat{I})\hat{U}_1(\ket{G}\otimes \hat{I})=\sin{(\hat{H}t)}. \end{align} Second, approximate $\hat{H}_2=\sin^{-1}(\hat{H}_1)$ using quantum signal processing. As the polynomial $p_{\text{arcsin},N}(x)$ of Lem.~\ref{Lem.Polynomial_arcsin} satisfies the conditions of Thm.~\ref{Thm:QSP_B}, the operator transformation $\hat{H}_{\text{lin}}t=p_{\text{arcsin},N}[\hat{H}_1]$ can be implemented exactly with $\mathcal{O}(N)$ queries to $\hat{U}_0$. This encodes $\hat{H}_{\text{lin}}t$ in standard-form with normalization $1$. Now choose $t$ such that $\|\hat{H}t\|\le c = 1/2$. Then $\|\sin{(\hat{H}t)}\|\le \|\hat{H}t\| \le 1/2$ as $|\sin(x)|\le |x|$. Third, evaluate the approximation error using Lem.~\ref{Lem.Polynomial_arcsin}: $\|\hat{H}_{\text{lin}}t-\hat{H}t\| \le \max_{x\in [-1/2,1/2]}|p_{\text{arcsin},N}(x)-\sin^{-1}(x)| \le \epsilon$, for $N=\mathcal{O}(\log{(1/\epsilon)})$. \end{proof} Incidentally, the equivalence between simulation and measurement also provides a simulation algorithm for Hamiltonians built from a sum of $d$ Hermitian components $\hat{H}=\sum^d_{j=1}\hat{H}_j$, where one only has access to these components through an oracle for their controlled exponentials $e^{-i\hat{H}_j t_j}$, for any $t_j\in\mathbb{R}$. Though results with similar scaling can be obtained through the techniques of compressed fractional queries~\cite{Berry2014}, this approach has two main advantages. First, the queries $\hat{H}_j$ are not restricted to only have eigenvalues $\pm 1$. Second, it is significantly simpler both in concept and in implementation.
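The operator identity underlying these steps, $\sin^{-1}\big(\frac{i}{2}(e^{-i\hat{H}t}-e^{i\hat{H}t})\big)=\hat{H}t$ for $\|\hat{H}t\|\le 1/2$, can be checked numerically via spectral calculus (an illustrative sketch, assuming numpy; not the quantum-signal-processing implementation itself):

```python
import numpy as np

def matfun(H, f):
    """Apply f to a Hermitian matrix via its spectral decomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(f(w)) @ V.conj().T

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
H = (A + A.T) / 2
H /= 2 * np.linalg.norm(H, 2)              # ||H t|| <= 1/2 with t = 1

U = matfun(H, lambda w: np.exp(-1j * w))   # e^{-iH}
H1 = 0.5j * (U - U.conj().T)               # (i/2)(e^{-iH} - e^{iH}) = sin(H)
assert np.allclose(H1, matfun(H, np.sin))

H2 = matfun((H1 + H1.conj().T) / 2, np.arcsin)  # arcsin(sin(H)) recovers H
assert np.allclose(H2, H)
```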
\begin{proof}[Proof of Cor.~\ref{Cor:HamExponentials}] From Thm.~\ref{Thm:Standard_Form_From_Ham_Sim}, $\mathcal{O}(\log(1/\epsilon_1))$ queries to $\hat{U}$ suffice to encode $\hat{H}_{\text{controlled}}=\sum^d_{j=1}\ket{j}\bra{j}_a\otimes \hat{H}'_j=(\bra{G'}_b\otimes\hat{I}_{as})\hat{U}'(\ket{G'}_b\otimes\hat{I}_{as})$ in standard-form with some state $\ket{G'}_b$ and signal oracle $\hat{U}'$, where $\max_{j}\|\hat{H}'_j-\hat{H}_j\|\le \epsilon_1$ and $\hat{H}_{\text{controlled}}$ acts on the system register $s$. Thus $(\bra{G}_a\bra{G'}_b\otimes\hat{I}_s)\hat{U}'(\ket{G}_a\ket{G'}_b\otimes\hat{I}_s)=\hat{H}_{\text{approx}}/\alpha$ encodes $\hat{H}_{\text{approx}}$ in standard-form, where $\|\hat{H}_{\text{approx}}-\hat{H}\|=\|\sum^d_{j=1}\alpha_j(\hat{H}'_j-\hat{H}_j)\|\le \sum^d_{j=1}\alpha_j\|\hat{H}'_j-\hat{H}_j\|\le \alpha \epsilon_1$. Using the fact $\|e^{i \hat A}-e^{i \hat B}\|\le \|\hat A-\hat B\|$~\cite{Berry2014}, we have $\|e^{-i\hat{H}_{\text{approx}}t}-e^{-i\hat{H}t}\|\le t \alpha \epsilon_1$. By applying Thm.~\ref{Thm:Ham_Sim_Qubitization}, $e^{-i\hat{H}_{\text{approx}}t}$ can be approximated with error $\epsilon_2$ using $\mathcal{O}(t \alpha+\frac{\log{(1/\epsilon_2)}}{\log\log{(1/\epsilon_2)}})\mathcal{O}(\log(1/\epsilon_1))$ queries to $\hat{U}$. By the triangle inequality, this approximates $e^{-i\hat{H}t}$ with error $\le t\alpha\epsilon_1+\epsilon_2$. Thus choose $\epsilon_1 = \frac{\epsilon}{2t\alpha}$ and $\epsilon_2=\epsilon/2$. \end{proof} \section{Conclusions} \label{Sec:Amp_concluson} We have combined ideas from qubitization and quantum signal processing to solve, in a general setting, the uniform spectral amplification problem of implementing a low-distortion expansion of the spectrum of Hamiltonians. One of the most surprising applications of our results is the simulation of sparse Hamiltonians, where we obtain an algorithm with query complexity $\mathcal{O}(t(d\Lambda_{\text{max}}\Lambda_1)^{1/2})$, excluding logarithmic factors.
This is particularly important as the best-case scaling $\mathcal{O}(\sqrt{d})$ is essential to an optimal realization of the fundamental quantum search algorithm. However, this improvement appears to contradict prior art, which claims that $\Theta(td\|\hat{H}\|_{\text{max}})$ queries are optimal. Nevertheless, the two are actually consistent. In the situation where information on $\|\hat{H}\|_1$ is unavailable, previous results are recovered as one may simply choose the worst-case $\Lambda_1=d\Lambda_{\text{max}}=d\|\hat{H}\|_{\text{max}}$. This naturally leads to the question of whether further improvement is possible. For instance, if information on $\|\hat{H}\|$ rather than $\|\hat{H}\|_1$ is made available, our lower bound is consistent with the stronger statement of $\Omega(t(d\|\hat{H}\|_{\text{max}}\|\hat{H}\|)^{1/2})$ queries. More generally, the universality of our results motivates related future directions. Thus far, a large number of common oracles used to describe Hamiltonians to quantum computers map to the standard-form without much difficulty. Rather than focusing on improving Hamiltonian simulation algorithms, perhaps an emphasis on improving the quality of the encoding, through a reduced normalization constant, would be more insightful, easier, and also lead to greater generality. Combined with the extremely low overhead of our techniques, algorithms obtained in this manner could be practical on digital quantum computers sooner rather than later. \section{Acknowledgments} G.H. Low is funded by the NSF RQCC Project No.1111337 and ARO quantum algorithms project. We thank Aram Harrow and Robin Kothari for suggesting $\text{PARITY}\circ\text{OR}$ as a possible lower bound.
\appendix \section{Polynomial Approximations to a Truncated Linear Function} \label{Sec:Polynomials_Amplitude_Multiplication} The proofs of Thm.~\ref{Cor:Operator_Amplification} and Thm.~\ref{Thm:Linear_Amplitude_Amplification} require a polynomial approximation $p_{\text{lin},\Gamma,n}$ to the truncated linear function \begin{align} \label{Eq:Linear_target_function_Appendix} f_{\text{lin},\Gamma}(x)= \begin{cases} \frac{x}{2\Gamma}, & |x| \in [0, \Gamma], \\ \in [-1,1], & |x| \in (\Gamma,1]. \end{cases} \end{align} The remainder of this section is dedicated to constructively proving the existence of $p_{\text{lin},\Gamma,n}$ with the following properties: \begin{theorem}[Polynomial for linear amplitude amplification] \label{Thm.Polynomial_LAA} $\forall\; \Gamma \in [0,1/2]$, $\epsilon \in(0, \mathcal{O}(\Gamma)]$, there exists an odd polynomial $p_{\text{lin},\Gamma,n}$ of degree $n=\mathcal{O}(\Gamma^{-1}\log{(1/\epsilon)})$ such that \begin{align} \forall\; {x\in[- \Gamma,\Gamma]},\; \left|p_{\text{lin},\Gamma,n}(x)-\frac{x}{2\Gamma}\right|\le \frac{\epsilon|x|}{2\Gamma} \quad\text{and}\quad \max_{x\in [-1,1]}|p_{\text{lin},\Gamma,n}(x)|\le 1. \end{align} \end{theorem} As close-to-optimal uniform polynomial approximations may be obtained by the Chebyshev truncation of entire functions, our strategy is to find an entire function $f_{\text{lin},\Gamma,\epsilon}$ that approximates $f_{\text{lin},\Gamma}$ over the domain $x\in[-\Gamma,\Gamma]$ with error $\epsilon$. We construct $f_{\text{lin},\Gamma,\epsilon}(x)$ in three steps. First, approximate the sign function $\text{sgn}(x)$ with an error function, which is entire. Second, approximate the rectangular function $\text{rect}(x)$ with a sum of two error functions, $\frac{1}{2}\left(\text{erf}(k (x+\delta))+\text{erf}(k (-x+\delta))\right)$. Third, multiply this by $\frac{x}{2\Gamma}$ to obtain $f_{\text{lin},\Gamma,\epsilon}(x)$, which approximates $f_{\text{lin},\Gamma}$ with some error $\epsilon$.
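The three-step construction can be prototyped numerically (an illustrative sketch assuming Python with scipy; the parameter choices $w=\kappa=2\Gamma$, $k$, and $\delta$ follow the lemmas below):

```python
import numpy as np
from scipy.special import erf

Gamma, eps = 0.25, 1e-3
w = kappa = 2 * Gamma                       # rect width and transition scale
k = np.sqrt(2) / kappa * np.sqrt(np.log(2 / (np.pi * eps**2)))
delta = (w + kappa) / 2

def f_rect(x):                              # step 2: sum of two error functions
    return 0.5 * (erf(k * (x + delta)) + erf(k * (-x + delta)))

def f_lin(x):                               # step 3: multiply by x/(2 Gamma)
    return x / (2 * Gamma) * f_rect(x)

x = np.linspace(-Gamma, Gamma, 1001)        # relative error on [-Gamma, Gamma]
assert np.all(np.abs(f_lin(x) - x / (2 * Gamma)) <= eps * np.abs(x) / (2 * Gamma) + 1e-12)

xfull = np.linspace(-1.0, 1.0, 2001)        # bounded by 1 on all of [-1, 1]
assert np.max(np.abs(f_lin(xfull))) <= 1.0
```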
The approximation error of this sequence is described by Lems.~\ref{Lem:Entire_Sgn}, \ref{Lem:Entire_Rect}, \ref{Lem:Entire_Linear}: \begin{lemma}[Entire approximation to the sign function $\text{sgn}(x)$] \label{Lem:Entire_Sgn} $\forall\;\kappa > 0, x\in\mathbb{R},\epsilon\in(0,\sqrt{2/e\pi}]$, let $k = \frac{\sqrt{2}}{\kappa}\log^{1/2}{(\frac{2}{\pi\epsilon^2})}$. Then the function $f_{\text{sgn},\kappa,\epsilon}(x)=\text{erf}(kx)$ satisfies \begin{align} \begin{aligned} 1&\ge|f_{\text{sgn},\kappa,\epsilon}(x)|, \\ \epsilon&\ge\max_{|x|\ge \kappa/2}|f_{\text{sgn},\kappa,\epsilon}(x)-\text{sgn}(x)|, \end{aligned} \quad \begin{aligned} \text{sgn}(x)= \begin{cases} 1, & x > 0, \\ -1, & x < 0, \\ 0, & x = 0. \end{cases} \end{aligned} \end{align} \end{lemma} \begin{proof} We apply elementary upper bounds on the complementary error function $\text{erfc}(x)=1-\text{erf}(x)=\frac{2}{\sqrt{\pi}}\int_x^\infty e^{-y^2}dy\le \frac{2}{\sqrt{\pi}}\int_x^\infty\frac{y}{x} e^{-y^2}dy=\frac{1}{x\sqrt{\pi}}e^{-x^2}$ for any $x>0$. Thus $\max_{x\ge \kappa/2}|\text{erf}(kx)-1|\le \frac{2}{k\kappa\sqrt{\pi}}e^{-(k\kappa)^2/4}=\epsilon$ and similarly for $x \le -\kappa/2$. This is solved by $k=\frac{1}{\kappa}\sqrt{2W(\frac{2}{\pi\epsilon^2})}$ where $W(x)$ is the Lambert-$W$ function. From the upper bound $\log{x}-\log{\log{x}}\le W(x)\le \log{x}-\frac{1}{2}\log{\log{x}}$ for $x\ge e$~\cite{Hoorfar2008LambertW}, any choice of $k \ge \frac{\sqrt{2}}{\kappa}\log^{1/2}{(\frac{2}{\pi\epsilon^2})}\ge \frac{\sqrt{2}}{\kappa}$ where $\frac{2}{\pi\epsilon^2}\ge e$ ensures that $\text{erf}(k x)$ is close to $\pm 1$ over $x\ge\kappa/2$. \end{proof} \begin{lemma}[Entire approximation to the rect function] \label{Lem:Entire_Rect} $\forall\;\kappa > 0,\; w>0,\; x\in\mathbb{R},\;\epsilon\in(0,\sqrt{2/e\pi}]$, let $k = \frac{\sqrt{2}}{\kappa}\log^{1/2}{(\frac{2}{\pi\epsilon^2})}, \delta = (w+\kappa)/2$.
Then the function $f_{\text{rect},w,\kappa,\epsilon}(x)=\frac{1}{2}\left(\text{erf}(k (x+\delta))+\text{erf}(k (-x+\delta))\right)$ satisfies \begin{align} \begin{aligned} 1 &\ge |f_{\text{rect},w, \kappa,\epsilon}(x)|, \\ \epsilon &\ge \max_{|x| \in [0,w/2]\cup[w/2+\kappa,\infty]} |f_{\text{rect},w,\kappa,\epsilon}(x)-\text{rect}(x/w)|, \end{aligned} \quad \text{rect}(x)= \begin{cases} 1, & |x| < 1/2, \\ 0, & |x| > 1/2, \\ 1/2, & |x| = 1/2. \end{cases} \end{align} \end{lemma} \begin{proof} This follows from the definition of the rect function $\text{rect}(x/w)=\frac{1}{2}(\text{sgn}(x+w/2)+\text{sgn}(-x+w/2))$. Thus we choose $\delta = (w+\kappa)/2$ and apply the error estimates of Lem.~\ref{Lem:Entire_Sgn}. \end{proof} \begin{lemma}[Entire approximation to the truncated linear function] \label{Lem:Entire_Linear} $\forall\;\Gamma > 0,\; x\in\mathbb{R},\;\epsilon\in(0,\sqrt{2/e\pi}]$, the function $f_{\text{lin},\Gamma,\epsilon}(x)=\frac{x}{2\Gamma}f_{\text{rect},2\Gamma, 2\Gamma,\epsilon}(x)$ satisfies \begin{align} |f_{\text{lin},\Gamma,\epsilon}(x)|\le 1, \quad \max_{|x| \in [0,\Gamma]} \left|f_{\text{lin},\Gamma,\epsilon}(x)-\frac{x}{2\Gamma}\right|\le \frac{|x|\epsilon}{2\Gamma}. \\\nonumber \end{align} \end{lemma} \begin{proof} Consider the domain $|x|\in[0,\Gamma]$. There, Lem.~\ref{Lem:Entire_Rect} gives the approximation error $|f_{\text{rect},2\Gamma, 2\Gamma,\epsilon}(x)-1|\le \epsilon$. Multiplying both sides by $\frac{x}{2\Gamma}$ gives the stated result. Now consider the domain $|x|\in[0,2\Gamma]$. There, $|f_{\text{rect},2\Gamma, 2\Gamma,\epsilon}(x)|\le 1$ and $|\frac{x}{2\Gamma}|\le 1$. Thus the product is bounded by $\pm1$. Now consider the domain $x\ge 2\Gamma$. Let us maximize $f_{\text{lin},\Gamma,\epsilon}(x)$ over $x,\epsilon$. Define $1/\epsilon'=\sqrt{\log{(\frac{2}{\pi\epsilon^2})}}\ge 1$. 
Thus $f_{\text{lin},\Gamma,\epsilon}(x)=\frac{x}{4\Gamma}\left(\text{erf}(\frac{x+2\Gamma}{\sqrt{2}\Gamma\epsilon'})+\text{erf}(\frac{2\Gamma-x}{\sqrt{2}\Gamma\epsilon'})\right)$. We make use of the upper bounds $\text{erfc}(x)=1-\text{erf}(x)\le\frac{1}{x\sqrt{\pi}}e^{-x^2}$ and $\text{erfc}(x)\le e^{-x^2}$. The first term has the bounds $1\ge \text{erf}(\frac{x+2\Gamma}{\sqrt{2}\Gamma \epsilon'}) \ge 1-\frac{1}{\frac{x+2\Gamma}{\sqrt{2}\Gamma}\sqrt{\pi}\epsilon'}e^{-(\frac{x+2\Gamma}{\sqrt{2}\Gamma\epsilon'})^2}\ge 1-\frac{1}{\sqrt{8\pi}\epsilon'}e^{-(\frac{x+2\Gamma}{\sqrt{2}\Gamma\epsilon'})^2}$. The second term has the bounds $-1+e^{-(\frac{2\Gamma-x}{\sqrt{2}\Gamma\epsilon'})^2} \ge\text{erf}(\frac{2\Gamma-x}{\sqrt{2}\Gamma\epsilon'})\ge -1$. By adding these together and extremizing the upper and lower bounds separately, $f_{\text{lin},\Gamma,\epsilon}(x) \in [-0.0011,0.56]$ independent of $\Gamma$ and for all $\epsilon'\in[0,1]$. These bounds apply to $x\le -2\Gamma$ with a minus sign as $f_{\text{lin},\Gamma,\epsilon}(x)$ is an odd function. \end{proof} However, the required polynomial must have a non-uniform error $\left|p_{\text{lin},\Gamma,n}(x)- \frac{x}{2\Gamma}\right|\le \frac{|x|}{2\Gamma}\epsilon$, proportional to $|x|$. Though $f_{\text{lin},\Gamma,\epsilon}$ of Lem.~\ref{Lem:Entire_Linear} has that property, its Chebyshev truncation results in a worst-case uniform error $\epsilon$ for all values of $x$. This is overcome by approximating $p_{\text{lin},\Gamma,n}(x)$ as the product of a Chebyshev truncation of the entire approximation to $\text{rect}(x)$ with $\frac{x}{2\Gamma}$. We now evaluate the scaling of the degree of the Chebyshev truncation of $f_{\text{rect},w,\kappa,\epsilon}$ in Lem.~\ref{Lem:Entire_Rect} with respect to its parameters and the desired approximation error.
Our starting point is the Jacobi-Anger expansion of the exponential decay function: \begin{align} \label{Eq:Jacobi-Anger} f_{\text{exp},\beta}(x)= e^{-\beta(x+1)}=e^{-\beta}\left(I_0(\beta)+2\sum^\infty_{j=1} I_j(\beta) T_j(-x)\right), \end{align} where $I_j(\beta)$ are modified Bessel functions of the first kind. The domain of this function and all the following are assumed to be $x\in[-1,1]$. By truncating this expansion above $j>n$, we obtain a degree $n$ polynomial approximation $p_{\text{exp},\beta,n}(x)$ with truncation error $\epsilon_{\text{exp},\beta,n}$: \begin{align} p_{\text{exp},\beta,n}(x)&= e^{-\beta}\left(I_0(\beta)+2\sum^n_{j=1} I_j(\beta) T_j(-x)\right),\\ \label{Eq:error_exp} \epsilon_{\text{exp},\beta,n} &= \max_{x\in[-1,1]}| p_{\text{exp},\beta,n}-f_{\text{exp},\beta}| = 2e^{-\beta}\sum^\infty_{j=n+1} |I_j(\beta)|. \end{align} Note that the equality in the rightmost term of Eq.~\ref{Eq:error_exp} arises as all the coefficients $I_j(\beta)\ge0$ when $\beta\ge 0$. Thus $\epsilon_{\text{exp},\beta,n}$ is maximized when the $|T_j(-x)|$ are all simultaneously maximized, which occurs at $x=-1\Rightarrow T_j(-x)=1$. By bounding $\epsilon_{\text{exp},\beta,n}$, one can in principle obtain the required degree $n$ as a function of $\beta,\epsilon$. Error estimates for various degree $n$ polynomial approximations to the exponential decay function can be found in the literature. However, these approximations are constructed using other methods. For instance, a Taylor expansion leads to scaling linear in $\beta$, and none explicitly bound the sum $\epsilon_{\text{exp},\beta,n}$. Fortunately, one particular error estimate in prior art is good enough and can be shown, with a little work, to implicitly bound $\epsilon_{\text{exp},\beta,n}$. We first sketch the proof of this estimate, then later show how it bounds $\epsilon_{\text{exp},\beta,n}$.
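The truncation error formula in Eq.~\ref{Eq:error_exp} can be checked numerically with modified Bessel functions (an illustrative sketch, assuming scipy and numpy): the maximum deviation of the degree-$n$ truncation sits at $x=-1$ and equals the discarded tail $2e^{-\beta}\sum_{j>n}I_j(\beta)$.

```python
import numpy as np
from scipy.special import iv
from numpy.polynomial.chebyshev import chebval

beta, n = 4.0, 10
x = np.linspace(-1.0, 1.0, 4001)           # x[0] = -1, where the error peaks

# Degree-n Jacobi-Anger truncation: e^{-beta} (I_0 + 2 sum_j I_j(beta) T_j(-x))
coeffs = np.exp(-beta) * np.array([iv(0, beta)] + [2 * iv(j, beta) for j in range(1, n + 1)])
p = chebval(-x, coeffs)
f = np.exp(-beta * (x + 1.0))

# Discarded tail: eps_exp = 2 e^{-beta} sum_{j>n} I_j(beta)
tail = 2 * np.exp(-beta) * sum(iv(j, beta) for j in range(n + 1, 60))

err = np.abs(p - f)
assert np.argmax(err) == 0                  # maximum error at x = -1
assert np.isclose(err.max(), tail, rtol=1e-6)
```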
\begin{lemma}[Polynomial approximation to exponential decay $e^{-\beta(x+1)}$ adapted from~\cite{Sachdeva2014Exp}] \label{Lem:Exponential_error_Sachdeva} $\forall \beta>0, \epsilon\in(0,1/2],$ there exists a polynomial $p_n$ of degree $n=\lceil\sqrt{2\lceil\max[\beta e^2,\log{(2/\epsilon)}]\rceil\log{(4/\epsilon)}}\rceil$ such that \begin{align} \max_{x \in [-1,1]}| p_n(x)-e^{-\beta(x+1)}| \le \epsilon. \end{align} \end{lemma} \begin{proof} Consider the Chebyshev expansion of the monomial $x^s=2^{1-s}\sum'^s_{j=0, s-j\;\text{even}}\binom{s}{(s-j)/2}T_j(x)=\mathbb{E}[T_{D_s}(x)]$, where $s \ge 0$ is an integer and $\sum'_j$ means the $j=0$ term is halved. The representation as an expectation over the random variable $D_s=\sum^s_{j=1}Y_j$, where $Y_j=\pm 1$ with equal probabilities, follows from the identity $xT_j(x)=\frac{1}{2}(T_{j-1}(x)+T_{j+1}(x))$. They show that the Chebyshev truncation of the monomial has error \begin{align} \label{Eq:Chebyshev_Monomial} p_{\text{mon},s,n}(x)&= 2^{1-s}\sideset{}{'}\sum^{\min(s,n)}_{j=0, s-j\;\text{even}}\binom{s}{(s-j)/2}T_j(x), \\ \nonumber \epsilon_{\text{mon},s,n}&=\max_{x\in[-1,1]}|p_{\text{mon},s,n}(x)-x^s| \le 2^{1-s}\sideset{}{'}\sum^{s}_{j=n+1, s-j\;\text{even}}\binom{s}{(s-j)/2}\le 2e^{-n^2/(2s)}, \end{align} which follows from the triangle inequality with $|T_j(x)|\le1$ and the Chernoff bound $P(|D_s|\ge n)\le 2 e^{-n^2/(2s)}$. By replacing each monomial up to degree $t$ in the Taylor expansion of $e^{-\beta(x+1)}=e^{-\beta}\sum^\infty_{j=0}\frac{(-\beta)^j}{j!}x^j$ with $p_{\text{mon},j,n}$, they obtain the degree $n$ polynomial $\tilde p_n(x)=e^{-\beta}\sum^t_{j=0}\frac{(-\beta)^j}{j!} p_{\text{mon},j,n}(x)$.
They show the error of this approximation is split into two terms: \begin{align} \label{Eq:Chebyshev_Exponential_Sachdeva} \epsilon_{\text{sach},\beta,n}&=\max_{x\in[-1,1]}|\tilde p_n(x)-e^{-\beta(x+1)}|\le \epsilon_1 + \epsilon_2, \\\nonumber \epsilon_1 & =e^{-\beta}\sum^t_{j=n+1}\frac{\beta^j}{j!}\max_{x\in[-1,1]}|p_{\text{mon},j,n}(x)-x^j| \le 2e^{-n^2/(2t)}, \quad \epsilon_2 = e^{-\beta}\max_{x\in[-1,1]}\left|\sum^\infty_{j=t+1}\frac{(-\beta)^j}{j!}x^j\right|\le 2e^{-\beta - t}. \end{align} By choosing $n=\lceil \sqrt{2t \log{(4/\epsilon)}} \rceil$ and $t=\lceil\max\{\beta e^2,\log{(4/\epsilon)}\}\rceil$, $\epsilon_1+\epsilon_2 \le \epsilon$. \end{proof} We now demonstrate how this upper bounds $\epsilon_{\text{exp},\beta,n}$. \begin{lemma}[Chebyshev truncation error of exponential decay $e^{-\beta(x+1)}$] \label{Lem:Exponential_error} $\forall\;\beta>0, \epsilon\in(0,1/2]$, the choice $n=\lceil\sqrt{2\lceil\max[\beta e^2,\log{(2/\epsilon)}]\rceil\log{(4/\epsilon)}}\rceil = \mathcal{O}(\sqrt{(\beta+\log{(1/\epsilon)})\log{(1/\epsilon)}})$, guarantees that $\epsilon_{\text{exp},\beta,n} \le \epsilon$. \end{lemma} \begin{proof} This result follows from the fact that truncating the Jacobi-Anger expansion in Eq.~\ref{Eq:Jacobi-Anger} discards fewer of the all-positive coefficients than does the procedure of Lem.~\ref{Lem:Exponential_error_Sachdeva}. Hence the maximum truncation error occurs at $x=1$ and is monotonically increasing with the number of coefficients omitted in the truncation. Observe that the first inequality in Eq.~\ref{Eq:Chebyshev_Monomial} is actually an equality $\epsilon_{\text{mon},s,n}= 2^{1-s}\sideset{}{'}\sum^{s}_{j=n+1, s-j\;\text{even}}\binom{s}{(s-j)/2}$. This follows from the same logic as Eq.~\ref{Eq:error_exp} -- all coefficients are positive, thus the maximum error occurs at $x=1$, which simultaneously maximizes all $ T_j(x=1)=1$. Similarly, the first inequality in Eq.~\ref{Eq:Chebyshev_Exponential_Sachdeva} is also actually an equality.
Let us express the truncation error of $\epsilon_{\text{sach},\beta,n}$ as a Chebyshev expansion in full \begin{align} \epsilon_{\text{sach},\beta,n}=&2e^{-\beta}\max_{x\in[-1,1]}\left|\sum^t_{j=n+1}\frac{(\beta/2)^j}{j!}\sideset{}{'}\sum^j_{k=n+1, j-k\;\text{even}}\binom{j}{(j-k)/2}T_k(x) \right. \\ \nonumber &+ \left. \sum^\infty_{j=t+1}\frac{(\beta/2)^j}{j!}\sideset{}{'}\sum^j_{k=0, j-k\;\text{even}}\binom{j}{(j-k)/2}T_k(x) \right|. \end{align} Note that we have used $(-\beta)^jT_k(-x)=\beta^jT_k(x)$ as all pairs $j-k$ are even. Thus $\epsilon_{\text{sach},\beta,n}$ is maximized at $T_k(x=1)=1$ in the sum above. This can be compared with \begin{align} \epsilon_{\text{exp},\beta,n} &= \max_{x\in[-1,1]}\left|2e^{-\beta}\sum^\infty_{j=n+1} I_j(\beta)T_j(x)\right| = \epsilon_{\text{sach},\beta,n} - 2e^{-\beta}\sum^\infty_{j=t+1}\frac{(\beta/2)^j}{j!}\sideset{}{'}\sum^n_{k=0, j-k\;\text{even}}\binom{j}{(j-k)/2} \\\nonumber & \le \epsilon_{\text{sach},\beta,n}. \end{align} More intuitively, both $\epsilon_{\text{exp},\beta,n}$ and $\epsilon_{\text{sach},\beta,n}$ sum over all Chebyshev orders $k > n$ in the expansion, but $\epsilon_{\text{sach},\beta,n}$ in addition sums over some positive coefficients with $k \le n$. Thus the upper bound of Lem.~\ref{Lem:Exponential_error_Sachdeva} on $\epsilon_{\text{sach},\beta,n}$ applies to $\epsilon_{\text{exp},\beta,n}$. \end{proof} In the following, we will bound all errors of our polynomial approximations in terms of $\epsilon_{\text{exp},\beta,n}$, a partial sum over Bessel functions.
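The degree prescribed by Lem.~\ref{Lem:Exponential_error} can likewise be spot-checked: evaluating the Bessel tail $\epsilon_{\text{exp},\beta,n}$ at that degree should give an error below $\epsilon$. A numerical sketch (assuming SciPy; illustration only, not part of the proof):

```python
import numpy as np
from scipy.special import iv  # modified Bessel functions I_j(beta)

def lemma_degree(beta, eps):
    # n = ceil( sqrt( 2 * ceil(max(beta e^2, log(2/eps))) * log(4/eps) ) )
    t = np.ceil(max(beta * np.e**2, np.log(2 / eps)))
    return int(np.ceil(np.sqrt(2 * t * np.log(4 / eps))))

def bessel_tail(beta, n, terms=500):
    # epsilon_{exp,beta,n} = 2 e^{-beta} sum_{j>n} I_j(beta)
    return 2 * np.exp(-beta) * sum(iv(j, beta) for j in range(n + 1, n + terms))

# (tail, target) pairs over a range of beta and epsilon
checks = [(bessel_tail(b, lemma_degree(b, e)), e)
          for b in (0.5, 2.0, 10.0, 50.0) for e in (1e-2, 1e-6)]
```

In each case the tail falls below the target accuracy, consistent with the lemma (and typically well below it, reflecting the looseness of the $\beta e^2$ factor).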
\begin{comment} \begin{align} \label{Eq:p_exp} p_{\text{exp},\beta}(x)&= e^{-\beta(x+1)}, &n=\mathcal{O}(\sqrt{(\beta + \log{(1/\epsilon)})\log{(1/\epsilon)}}), \\ \label{Eq:p_gauss} p_{\text{gauss},\gamma}&= e^{-(\gamma x)^2}, &n=\mathcal{O}(\sqrt{(\gamma^2 + \log{(1/\epsilon)})\log{(1/\epsilon)}}), \\ \label{Eq:p_error} p_{\text{erf},k}&= \text{erf}(k x), &n=\mathcal{O}(\sqrt{(k^2 + \log{(1/\epsilon)})\log{(1/\epsilon)}}), \\ \label{Eq:p_error_shifted} p_{\text{erf},k,\delta}&= \text{erf}(k (x-\delta)), &n=\mathcal{O}(\sqrt{(k^2 + \log{(1/\epsilon)})\log{(1/\epsilon)}}), \\ \label{Eq:p_sign} p_{\text{sign},\kappa,\delta}&= \begin{cases} 1, & x \ge \delta+\kappa/2, \\ -1, & x \le \delta-\kappa/2, \\ \in[-1,1], & x \in (\delta-\kappa/2,\delta+\kappa/2), \end{cases} &n=\mathcal{O}(\frac{1}{\kappa}\log{(1/\epsilon)}), \\ \label{Eq:p_tophat} p_{\text{tophat},\kappa,w}&= \begin{cases} 1, & |x| \le w/2, \\ 0, & |x| \ge w/2+\kappa, \\ \in[-1,1], & |x| \in (w/2,w/2+\kappa). \end{cases} &n=\mathcal{O}(\frac{1}{\kappa}\log{(1/\epsilon)}), \\ \label{Eq:p_linear} p_{\text{linear},g}&= \begin{cases} g x, & |x| \le 1/(2g), \\ \in[-1,1], & \text{otherwise}. \end{cases} &n=\mathcal{O}(g\log{(1/\epsilon)}). \end{align} \end{comment} \begin{corollary}[Polynomial approximation to the Gaussian function $e^{-(\gamma x)^2}$] $\forall\gamma \ge 0, \epsilon\in(0,1/2]$ the even polynomial $ p_{\text{gauss},\gamma,n}$ of even degree $n=\mathcal{O}(\sqrt{(\gamma^2 + \log{(1/\epsilon)})\log{(1/\epsilon)}})$ satisfies \begin{align} \label{Eq:p_tilde_gauss} p_{\text{gauss},\gamma,n}(x)&= p_{\text{exp},\gamma^2/2,n/2}(2x^2-1) = e^{-\gamma^2/2}\left(I_0(\gamma^2/2)+2\sum^{n/2}_{j=1} I_j(\gamma^2/2) (-1)^{j}T_{2j}(x)\right), \\ \nonumber \epsilon_{\text{gauss},\gamma,n}&=\max_{x \in [-1,1]}| p_{\text{gauss},\gamma,n}(x)-e^{-(\gamma x)^2}| = \epsilon_{\text{exp},\gamma^2/2,n/2}\le \epsilon. 
\end{align} \end{corollary} \begin{proof} This follows from Eq.~\ref{Eq:Jacobi-Anger} by a simple change of variables. Let $x'=T_2(x)=2x^2-1$ and $\gamma^2 = 2\beta$. Thus $e^{-\beta(x'+1)}=e^{-(\gamma x)^2}$. As $2x^2-1$ maps $[-1,1]$ onto $[-1,1]$, the domain of $e^{-(\gamma x)^2}$ is mapped to that of $f_{\text{exp},\beta}(x)$, and the definition Eq.~\ref{Eq:p_tilde_gauss} results. Using the Chebyshev semigroup property $T_{j}(\pm T_{2}(x))=(\pm 1)^{j}T_{2j}(x)$, $p_{\text{gauss},\gamma,n}$ is an even polynomial of degree $n$ and its approximation error is obtained by substitution into Eq.~\ref{Eq:error_exp}. \end{proof} A polynomial approximation to the error function follows immediately by integrating $p_{\text{gauss},\gamma,n}$. \begin{corollary}[Polynomial approximation to the error function $\text{erf}(kx)$] $\forall k > 0, \epsilon\in(0,\mathcal{O}(1)]$ the odd polynomial $p_{\text{erf},k,n}$ of odd degree $n=\mathcal{O}(\sqrt{(k^2 + \log{(1/\epsilon)})\log{(1/\epsilon)}})$ satisfies \begin{align} \label{Eq:p_tilde_erf} p_{\text{erf},k,n}(x)&= \frac{2 k e^{-k^2/2}}{\sqrt{\pi}}\left(I_0(k^2/2)x+\sum^{(n-1)/2}_{j=1} I_j(k^2/2) (-1)^{j} \left(\frac{T_{2j+1}(x)}{2j+1}-\frac{T_{2j-1}(x)}{2j-1}\right) \right), \\ \nonumber \epsilon_{\text{erf},k,n}&=\max_{x \in [-1,1]}| p_{\text{erf},k,n}(x)-\text{erf}(kx)| \le \frac{4 k}{\sqrt{\pi}n}\epsilon_{\text{gauss},k,n-1}\le\epsilon. \end{align} \end{corollary} \begin{proof} From the definition of the error function $\text{erf}(kx)=\frac{2}{\sqrt{\pi}}\int_0^{kx}e^{-y^2}dy=\frac{2k}{\sqrt{\pi}}\int_0^x e^{-(ky)^2}dy$, the polynomial $ p_{\text{erf},k,n+1}(x)=\frac{2k}{\sqrt{\pi}}\int_0^x p_{\text{gauss},k,n}(y)dy$ follows directly from integrating Eq.~\ref{Eq:p_tilde_gauss} term-by-term using the identity $\int_0^x T_{j}(y) dy =\frac{1}{2}\left(\frac{T_{j+1}(x)}{j+1}-\frac{T_{j-1}(x)}{j-1}\right)$, valid for even $j$.
The error of the remaining terms is bounded as follows: \begin{align} \epsilon_{\text{erf},k,n} &\le \frac{2 k e^{-k^2/2}}{\sqrt{\pi}}\max_{x\in[-1,1]}\left|\sum^{\infty}_{j=(n+1)/2} I_j(k^2/2) (-1)^{j} \left(\frac{T_{2j+1}(x)}{2j+1}-\frac{T_{2j-1}(x)}{2j-1} \right)\right| \\\nonumber &\le \frac{2 k e^{-k^2/2}}{\sqrt{\pi}}\sum^{\infty}_{j=(n+1)/2} |I_j(k^2/2)| \left(\frac{1}{2j+1}+\frac{1}{2j-1}\right) \\\nonumber &\le \frac{4 k e^{-k^2/2}}{\sqrt{\pi}n}\sum^{\infty}_{j=(n+1)/2} |I_j(k^2/2)| =\frac{4 k}{\sqrt{\pi}n}\epsilon_{\text{gauss},k,n-1}. \end{align} Thus $\epsilon_{\text{erf},k,n}\le \frac{4 k}{\sqrt{\pi}n}\epsilon_{\text{exp},k^2/2,(n-1)/2}$. Moreover, $n=\Omega(k \log^{1/2}{(1/\epsilon)})$, so the prefactor $\frac{k}{n}=\mathcal{O}(\log^{-1/2}{(1/\epsilon)})=\mathcal{O}(1)$ does not make the scaling any worse. \end{proof} A polynomial approximation to the shifted error function follows by a change of variables. \begin{corollary}[Polynomial approximation to the shifted error function $\text{erf}(k(x-\delta))$] \label{Cor:Shifted_Sgn} $\forall k > 0, \delta \in[-1,1], \epsilon\in(0,\mathcal{O}(1)]$ the polynomial $p_{\text{erf},k,\delta,n}(x)= p_{\text{erf},2k,n}((x-\delta)/2)$ of odd degree $n=\mathcal{O}(\sqrt{(k^2 + \log{(1/\epsilon)})\log{(1/\epsilon)}})$ satisfies \begin{align} \label{Eq:p_tilde_erf_shifted} \epsilon_{\text{erf},k,\delta,n}&=\max_{x \in [-1,1]}|p_{\text{erf},k,\delta,n}(x)-\text{erf}(k(x-\delta))| \le \epsilon_{\text{erf},2k,n}\le\epsilon. \end{align} \end{corollary} \begin{proof} This follows trivially from $\text{erf}(k(x-\delta))=\text{erf}(2k\frac{x-\delta}{2})$. Note that we have doubled the degree of our polynomials in order to double the width of the domain, which we exploit to allow translations. \end{proof} This polynomial approximation of the shifted error function is the basic ingredient we use to construct the more complicated functions $\text{sgn}$ and $\text{rect}$ through Lems.~\ref{Lem:Entire_Sgn} and \ref{Lem:Entire_Rect}.
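As a sanity check on Eq.~\ref{Eq:p_tilde_erf}, the coefficients can be assembled directly in the Chebyshev basis and compared against $\text{erf}(kx)$. The sketch below (assuming SciPy and NumPy's `Chebyshev` class; the particular values $k=4$, $n=31$ are illustrative) does exactly this:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev
from scipy.special import iv, erf

def p_erf_cheb(k, n):
    """Chebyshev coefficients (basis T_0..T_n) of the odd approximant; n odd."""
    assert n % 2 == 1
    c = np.zeros(n + 1)
    pref = 2 * k * np.exp(-k**2 / 2) / np.sqrt(np.pi)
    c[1] = pref * iv(0, k**2 / 2)               # I_0(k^2/2) * x = I_0 * T_1
    for j in range(1, (n - 1) // 2 + 1):
        w = pref * iv(j, k**2 / 2) * (-1)**j
        c[2*j + 1] += w / (2*j + 1)             # + T_{2j+1}/(2j+1)
        c[2*j - 1] -= w / (2*j - 1)             # - T_{2j-1}/(2j-1)
    return Chebyshev(c)

k, n = 4.0, 31
p = p_erf_cheb(k, n)
x = np.linspace(-1, 1, 2001)
err = np.max(np.abs(p(x) - erf(k * x)))
```

The observed uniform error is far below the crude $\frac{4k}{\sqrt{\pi}n}\epsilon_{\text{gauss},k,n-1}$ bound, and the polynomial vanishes at the origin as an odd function must.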
\begin{corollary}[Polynomial approximation to the sign function $\text{sgn}(x-\delta)$] $\forall\;\kappa > 0, \delta \in[-1,1], \epsilon\in(0,\mathcal{O}(1)]$ the polynomial $p_{\text{sgn},\kappa,\delta,n}(x)=p_{\text{erf},k,\delta,n}(x)$ of odd degree $n=\mathcal{O}(\frac{1}{\kappa}\log{(1/\epsilon)})$, where $k = \frac{\sqrt{2}}{\kappa}\log^{1/2}{(\frac{2}{\pi\epsilon_1^2})}$, satisfies \begin{align} \label{Eq:p_tilde_sgn_shifted} \epsilon_{\text{sgn},\kappa,\delta,n}&=\max_{x \in [-1,\delta-\kappa/2]\cup [\delta+\kappa/2,1]}|p_{\text{erf},k,\delta,n}(x)-\text{sgn}(x-\delta)| \le \epsilon_{\text{erf},k,\delta,n}+\epsilon_1 \le 2\epsilon_{\text{erf},k,\delta,n}\le \epsilon. \end{align} \end{corollary} \begin{proof} The equation for $k$ comes from Lem.~\ref{Lem:Entire_Sgn}. We then choose $\epsilon_1=\epsilon_{\text{erf},k,\delta,n}$, which defines an implicit equation for $\epsilon_1$ and doubles the error. \end{proof} \begin{corollary}[Polynomial approximation to the rectangular function $\text{rect}(x/w)$] $\forall\;\kappa \in (0,2], w \in [0,2-\kappa], \epsilon\in(0,\mathcal{O}(1)]$, the even polynomial $p_{\text{rect},w,\kappa,n}(x)=-\frac{1}{2}\left( p_{\text{sgn},\kappa,(w+\kappa)/2,n+1}(x)+ p_{\text{sgn},\kappa,(w+\kappa)/2,n+1}(-x)\right)$ of even degree $n=\mathcal{O}(\frac{1}{\kappa}\log{(1/\epsilon)})$ satisfies \begin{align} \label{Eq:p_tilde_rect} \epsilon_{\text{rect},w,\kappa,n}& =\max_{|x| \in [0,w/2]\cup[w/2+\kappa,1]} |p_{\text{rect},w,\kappa,n}(x)-\text{rect}(x/w)| \le \epsilon_{\text{sgn},\kappa,(w+\kappa)/2,n+1}\le \epsilon. \end{align} \end{corollary} \begin{proof} This follows from the construction of a rectangular function with two sign functions in Lem.~\ref{Lem:Entire_Rect}; note the overall sign, as $-\frac{1}{2}\left(\text{sgn}(x-\delta)+\text{sgn}(-x-\delta)\right)=\text{rect}(x/w)$ with $\delta=(w+\kappa)/2$ outside the transition regions.
\end{proof} \begin{corollary}[Polynomial approximation to the truncated linear function $f_{\text{lin},\Gamma}(x)$] \label{Lem:Polynomial_Truncated_Linear} $\forall\;\Gamma \in ( 0,1/2],\epsilon\in(0,\mathcal{O}(\Gamma)]$, the odd polynomial $p_{\text{lin},\Gamma,n}(x)=\frac{x}{2\Gamma}p_{\text{rect},2\Gamma,2\Gamma,n-1}(x)$ of odd degree $n=\mathcal{O}(\frac{1}{\Gamma}\log{(1/\epsilon)})$ satisfies \begin{align} \label{Eq:p_tilde_trunc_linear} \epsilon_{\text{lin},\Gamma,n}& =\max_{|x| \in [0,\Gamma]} \frac{2\Gamma}{|x|}\left|p_{\text{lin},\Gamma,n}(x)-\frac{x}{2\Gamma}\right| \le \epsilon_{\text{rect},2\Gamma,2\Gamma,n-1}\le \epsilon. \end{align} \end{corollary} \begin{proof} This follows from multiplying a rectangular function with a linear function in Lem.~\ref{Lem:Entire_Linear}. One subtlety arises here: the error of $p_{\text{lin},\Gamma,n}$ is bounded by $\epsilon_{\text{rect},2\Gamma,2\Gamma,n-1}$ in the domain $|x|\in[3\Gamma,1]$. Thus multiplying by $\frac{x}{2\Gamma}$ increases this error to at most $\frac{\epsilon_{\text{rect},2\Gamma,2\Gamma,n-1}}{2\Gamma}$. However, the quantum signal processing conditions in Thm.~\ref{Lem:AchievableD} require all polynomials to be bounded by $1$. This implicitly constrains us to choose $n$ such that $\epsilon_{\text{rect},2\Gamma,2\Gamma,n-1}\le 2 \Gamma$ is also satisfied. \end{proof} In all the above cases, the entire functions being approximated are bounded by $1$. When the approximation error is $\epsilon$, the resulting polynomial is then bounded by $1+\epsilon$. In such an event, we simply rescale these polynomials by a factor $\frac{1}{1+\epsilon}$. At worst, this only doubles the error of the approximation. We also emphasize that our proposed sequence of polynomial transformations serves primarily to prove asymptotic scalings. In practice, close-to-optimal constant factors in the degree of these polynomials can be obtained by a direct Chebyshev truncation of the entire functions.
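To illustrate the closing remark, a direct Chebyshev construction of the scaled error function already yields a good sign-function approximation in practice. The sketch below (an illustration; it assumes NumPy's `Chebyshev.interpolate`, which interpolates in Chebyshev points and is within a small constant of the optimal truncation for analytic functions, and SciPy's `erf`) takes $\kappa=0.1$ and a target $\epsilon_1=10^{-3}$:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev
from scipy.special import erf

# Approximate sgn(x) outside a gap of half-width kappa/2.
kappa, eps1 = 0.1, 1e-3
k = (np.sqrt(2) / kappa) * np.sqrt(np.log(2 / (np.pi * eps1**2)))  # as in the sgn corollary

# Direct Chebyshev construction of erf(k x); the degree is chosen generously.
p = Chebyshev.interpolate(lambda x: erf(k * x), deg=301)

x = np.linspace(-1, 1, 4001)
outside = np.abs(x) >= kappa / 2
err = np.max(np.abs(p(x[outside]) - np.sign(x[outside])))
```

The residual error outside the gap is dominated by $|\text{erf}(kx)-\text{sgn}(x)|\le\epsilon_1$ rather than by the truncation, and the polynomial stays bounded near $1$ on the whole interval, as required for quantum signal processing.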
\begin{comment} \section{Polynomials for Gibbs sampling} When $\beta=\mathcal{O}(1)$, we may simply use the polynomial $\tilde p_{\text{exp},\beta,n}$. In the case of Gibbs sampling, we require a polynomial approximation to $e^{-\beta x}$. However, when $E_{min}$ is known, a speedup by factor $e^{-\beta (x-\Gamma)}$ can be achieved by finding a polynomial approximation to \begin{align} p_{\text{Gibbs},\beta}= \begin{cases} e^{-\beta x}, & x \ge 0 \\ \in [-1,1], & \text{otherwise}. \end{cases} \end{align} This function may be approximated by the product \begin{align} p_{\text{Gibbs},\beta}=e^{-\beta x}\frac{1+\text{erf}(k x)}{2}. \end{align} For some choice of $k$. As $\forall x\ge 0, |\frac{1+\text{erf}(k x)}{2}|\le 1$ is close to $1$ and $\forall x\le 0, |\frac{1+\text{erf}(k x)}{2}|\le e^{-(kx)^2}$ is close to $0$, the product $|p_{\text{Gibbs},\beta}|\le 1$. From the upper bound on the complementary error function, $1-\text{erf}(k x)\le \epsilon$ for any $x\ge \kappa >0, k\ge \frac{\sqrt{2}}{\kappa}\log^{1/2}{(\frac{2}{\pi\epsilon^2})}$. By choosing $\kappa = \beta^{-1}$, \begin{align} e^{-\beta x}\frac{1+\text{erf}(k x)}{2}\approx \begin{cases} e^{-\beta x}, & x \ge \beta^{-1} \\ \in [-1,1], & \text{otherwise}. \end{cases} \end{align} Thus this is a good candidate for polynomial approximation. We evaluate the scaling of the Chebyshev truncation of this function using more powerful techniques. We apply this to evaluate the truncation error of $p_{\text{Gibbs},\beta}$ \begin{theorem}[Polynomial approximation to Gibbs function] $\forall\;\beta =\Omega(1), \epsilon =\mathcal{O}(1)$, the polynomial $\tilde p_{\text{Gibbs},\beta,n}$ of degree $n=\mathcal{O}\left(\beta \log{(1/\epsilon)}\log{(\beta/\epsilon)}\right)$ obtained by the Chebyshev truncation of $e^{-\beta x}\frac{1+\text{erf}(\beta x \log{(1/\epsilon)})}{2}$ satisfies \begin{align} \epsilon_{\text{Gibbs},\beta,n}=\max_{x \in [\beta^{-1},1]}|\tilde p_{\text{Gibbs},\beta,n}(x)-e^{-\beta x}| \le \epsilon. 
\end{align} \end{theorem} \begin{proof} We first need to prove some simple upper bounds on the complex error function. From the definition $\text{erf}(i y)=\frac{2}{\sqrt{\pi}}\int_0^{y} e^{x^2}d x\le \frac{2 |y|}{\sqrt{\pi}}e^{y^2}$. This can be improved by splitting the integral into two parts: $ \text{erf}(i y)\le \frac{2}{\sqrt{\pi}}\int_0^{y} e^{x^2}d x = \frac{2}{\sqrt{\pi}}\left(\int_0^{y/2} e^{x^2}d x+\int_{y/2}^{y} e^{x^2}d x\right) \le \frac{2}{\sqrt{\pi}}\left(\int_0^{y/2} e^{x^2}d x+\int_{y/2}^{y} \frac{2x}{y} e^{x^2}d x\right) \le \frac{y}{\sqrt{\pi}}e^{(y/2)^2}+ \frac{2}{\sqrt{\pi}}\frac{e^{y^2}-e^{(y/2)^2}}{y} = \frac{e^{y^2}}{\sqrt{\pi}}\left(\frac{2+e^{-3y^2/4}(y^2-2)}{y}\right)\le e^{y^2} $, where we have used the facts $\forall x \ge y/2, \frac{2x}{y}\ge 1$ and $\forall y\ge 0, \frac{1}{\sqrt{\pi}y}\left(2+e^{-3y^2/4}(y^2-2)\right)\le 1$. We also require the upper bound on $|\text{erf}(r e^{i\phi})| $ for $\phi \in (\pi/4,3\pi/4)$, $r > 0$. One upper bound is \begin{align} \label{Eq:Erfc_UpperBound} |\text{erf}(r e^{i\phi})| &= \frac{2}{\sqrt{\pi}}\left|\int_0^r e^{- r^2 e^{2i\phi}} dr\right| = \frac{2}{\sqrt{\pi}}\left|\int_0^r e^{- r^2 \cos{(2\phi)}}e^{- i r^2 \sin{(2\phi)}} dr\right| \\ \nonumber & \le \frac{2}{\sqrt{\pi}}\left|\int_0^r e^{r^2 \cos{(2\phi-\pi)}} dr\right| = \frac{2}{\sqrt{\pi\cos{(2\phi-\pi)}}}\left|\int_0^{r\sqrt{\cos{(2\phi-\pi)}}} e^{r^2} dr\right| \\\nonumber & \le \frac{e^{r^2 \cos{(2\phi-\pi)}}}{\sqrt{\cos{(2\phi-\pi)}}}= \frac{e^{\text{Re}(-z^2)}}{\sqrt{\cos{(2\phi-\pi)}}}. \end{align} Let us now evaluate $M=\max_{z\in E_\rho}\left|e^{-\beta z}\frac{1+\text{erf}(k z)}{2}\right|$. Note that the largest value of $\text{erf}{(k z)}\sim e^{(k \text{Im}(z))^2}$ occurs along the imaginary axis, whereas the largest value of $e^{-\beta z}$ occurs along the negative real axis.
However, so long as $k^2 \gg \beta$, we can expect the contribution of $\text{erf}(k z)$ to dominate, thus the maximizing value of $z$ will be close to the imaginary axis. As we always choose $k=\Theta(\beta \log^{1/2}{(1/\epsilon)})$, this is indeed the case, and the upper bound of Eq.~\ref{Eq:Erfc_UpperBound} is applicable. \begin{align} M &= \max_{z\in E_\rho}\left|e^{-\beta z}\frac{1+\text{erf}(k z)}{2}\right| \le \max_{z\in E_\rho}\left|e^{-\beta z}\right|\frac{1+\left|\int^{k z}_0 e^{-x^2} dx\right|}{2} \\ \nonumber &\le \max_{z\in E_\rho}\frac{e^{-\beta \text{Re}(z)}e^{-k^2\text{Re}(z^2)}}{\sqrt{\cos{(2\phi-\pi)}}} \le \max_{z\in E_\rho}\frac{e^{\gamma}}{\sqrt{\cos{(2\phi-\pi)}}}, \\ \nonumber \gamma &=-\beta \left(\frac{1+\rho^2}{2\rho}\cos{(\theta)}\right)-k^2\left(\frac{1}{2}+\frac{\cos{(2\theta)}}{4}\left(\rho^2+\rho^{-2}\right)\right). \end{align} Note that in the second line, we use the fact that the upper bound Eq.~\ref{Eq:Erfc_UpperBound} on $\left|\int^{k z}_0 e^{-x^2} dx\right|$ is also greater than $1$, and in the expression for $\gamma$, we substitute $z = \frac{\rho e^{i\theta}+\rho^{-1}e^{-i\theta}}{2}$. The exponent $\gamma$ is maximized when $-\cos{(\theta)}=\frac{\beta \rho}{2k^2}\frac{1+\rho^2}{1+\rho^4} \le \frac{1}{2\beta \log{(1/\epsilon)}}, $ which is consistent with the ansatz that the maximizing value of $z$ lies close to the imaginary axis. Thus for sufficiently large $\beta\log{(1/\epsilon)} =\Omega(1)$, we may bound $\sqrt{\cos{(2\phi-\pi)}}=\Omega(1)$. As $e^{-\beta z}\text{erf}{(k z)}$ is an entire function with no singularities, we are free to choose any $\rho > 1$. We make the following choice: \begin{align} \rho= e^{1/\beta} \Rightarrow \gamma = \mathcal{O}\left(\log{(1/\epsilon)}\right),\quad M=\mathcal{O}(1/\epsilon).
\end{align} Substituting this into Thm.~\ref{Thm:ChebyshevTruncation}, we obtain the scaling of the Chebyshev truncation of $e^{-\beta z}\frac{1+\text{erf}{(k z)}}{2}$: \begin{align} \epsilon &\le \frac{2M \rho^{-n}}{\rho-1} = \mathcal{O}\left(\epsilon^{-1}\beta e^{-n/\beta}\right),\\ \nonumber n&=\mathcal{O}\left(\beta\log{(\beta/\epsilon)}\right) =\tilde{\mathcal{O}}(\beta\log{(1/\epsilon)}). \end{align} As this differs from the results of Thm.~\ref{Thm:Error_scalings} only by a $\log(\beta)$ factor, we conjecture that $n=\mathcal{O}(\beta\log{(1/\epsilon)})$ is optimal. \end{proof} Using the same trick as Cor.~\ref{Cor:Shifted_Sgn} for obtaining the shifted $\text{sgn}$ function, we may obtain the shifted Gibbs function. \begin{theorem}[Polynomial approximation to the shifted Gibbs function] $\forall\;\beta =\Omega(1), \delta\in[-1,1-\beta^{-1}], \epsilon =\mathcal{O}(1)$, the polynomial $\tilde p_{\text{Gibbs},\beta,\delta,n}$, of degree $n=\mathcal{O}\left(\beta \log{(1/\epsilon)}\log{(\beta/\epsilon)}\right)$ satisfies \begin{align} \tilde p_{\text{Gibbs},\beta,\delta,n}(x)& = \tilde p_{\text{Gibbs},2\beta,\delta,n}((x-\delta-\beta^{-1})/2) \\ \nonumber \epsilon_{\text{Gibbs},\beta,\delta,n}&=\max_{x \in [\delta,1]}|\tilde p_{\text{Gibbs},\beta,n}(x)-e^{-\beta (x-\delta)}| \le \epsilon. \end{align} \end{theorem} \end{comment} \section{Polynomials for Low-Energy Uniform Spectral Amplification} \label{Sec:Polynomials_Low_energy} The proof of Thm.~\ref{Thm:Ham_Encoding_Uniform_Amplification} requires the polynomial approximation $p_{\text{gap},\Delta,n}(x)$ of Lem.~\ref{Lem.Polynomial_gapped_linear} to the truncated linear function \begin{align} \label{Eq:Linear_target_function} f_{\text{gap},\Delta}(x)= \begin{cases} \frac{x+1-\Delta}{\Delta}, & x \in [-1, -1+\Delta], \\ \in[-1,1], & \text{otherwise}.
\end{cases} \end{align} Our strategy is to construct an entire function $f_{\text{gap},\Delta,\epsilon}$ that approximates $f_{\text{gap},\Delta}$ with error $\epsilon$ over the domain of interest. Entire functions are desirable as they are analytic on the entire complex plane. This implies that truncating their expansion $f_{\text{gap},\Delta,\epsilon}(x)=\sum_{j=0}^\infty a_j T_j(x)$ in the Chebyshev basis produces polynomials with a uniform approximation error that scales almost optimally with the degree $n$~\cite{Trefethen2013approximation}. We build $f_{\text{gap},\Delta,\epsilon}$ by using the entire approximation to the sign function $\text{sgn}(x)$ in Lem.~\ref{Lem:Entire_Sgn} of Appendix~\ref{Sec:Polynomials_Amplitude_Multiplication} and some intermediate results on the error function $\text{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-y^2}dy$. \begin{lemma}[Entire approximation to the gapped linear function $f_{\text{gap},\Delta}(x)$] \label{Lem.Entire_gapped_lin} $\forall\;\Delta\in[0,1/2],\; x\in[-1,\infty],\; \epsilon\in(0,\sqrt{\frac{1}{2e\pi}}]$, the function $f_{\text{gap},\Delta,\epsilon}(x)$ satisfies \begin{align} f_{\text{gap},\Delta,\epsilon}(x)&=\frac{x+1-\Delta}{\Delta}\frac{1-f_{\text{sgn},\Delta,2\epsilon}(x+1-3\Delta/2)}{2}, \\\nonumber \epsilon &\ge \max_{ x \in [-1,-1+\Delta]} \left|f_{\text{gap},\Delta,\epsilon}(x)-\frac{x+1-\Delta}{\Delta}\right|, \\\nonumber 0&\le \max_{ x \in [-1+\Delta,\infty]}f_{\text{gap},\Delta,\epsilon}(x)\le 1, \\\nonumber \epsilon/10 &\ge \max_{ x \in [1-\Delta,1]} |f_{\text{gap},\Delta,\epsilon}(x)|. \end{align} \end{lemma} \begin{proof} Let us derive bounds on the following regions:\\ $x\in[-1,-1+\Delta]$: From Lem.~\ref{Lem:Entire_Sgn}, $|\frac{1-f_{\text{sgn},\Delta,2\epsilon}(x+1-3\Delta/2)}{2}-1|\le \epsilon$, i.e.\ this factor approximates the constant function $1$ with error $\epsilon$.
By multiplying both sides with $\frac{x+1-\Delta}{\Delta}$, $|f_{\text{gap},\Delta,\epsilon}(x)-\frac{x+1-\Delta}{\Delta}|\le |\frac{x+1-\Delta}{\Delta}|\epsilon \le \epsilon$. \\ $x\in[-1+\Delta,-1+3\Delta/2]$: From Lem.~\ref{Lem:Entire_Sgn}, $\frac{1-f_{\text{sgn},\Delta,2\epsilon}(x+1-3\Delta/2)}{2}\in [0,1]$. In this region, $\frac{x+1-\Delta}{\Delta}\in[0,1/2]$. Thus by multiplying, $f_{\text{gap},\Delta,\epsilon}(x)\in[0,1/2]$. \\ $x\in [-1+3\Delta/2,1-\Delta]$: From the upper bound $\text{erfc}(x)\le e^{-x^2}$, $f_{\text{gap},\Delta,\epsilon}(x)\le \frac{x+1-\Delta}{2\Delta}e^{-k^2(x+1-3\Delta/2)^2}$, where $k=\frac{\sqrt{2}}{\Delta}\log^{1/2}{(\frac{1}{2\pi\epsilon^2})}$. The worst case occurs when $k$ is smallest, i.e.\ when $\epsilon=\sqrt{\frac{1}{2e\pi}}$ takes its largest allowed value. Thus the upper bound is maximized with value $\frac{1+\sqrt{5}}{4}e^{(\sqrt{5}-3)/4}\le 0.7$ at $x=-1+\frac{1}{4}(5+\sqrt{5})\Delta < -1+2\Delta$. \\ $x\in [1-\Delta,\infty)$: The upper bound obtained for $x\in[-1+3\Delta/2,1-\Delta]$ still applies here and is monotonically decreasing with $x$. Thus it is maximized when $\Delta=1/2$ is largest and at $x=1-\Delta$. With this upper bound, $f_{\text{gap},1/2,\epsilon}(1/2)\le 2e^{-9k^2/16}= 32\sqrt{2\pi^9}\,\epsilon^9< \frac{ \epsilon}{10}$ by substituting $k$ and then using the fact $\epsilon \le \sqrt{\frac{1}{2e\pi}}$. \\ $x\in [-1+\Delta,\infty]$: $\frac{x+1-\Delta}{\Delta}$ and $\frac{1-f_{\text{sgn},\Delta,2\epsilon}(x+1-3\Delta/2)}{2}$ are both positive, thus $f_{\text{gap},\Delta,\epsilon}(x)$ is positive. \end{proof} We now construct a degree $n$ polynomial approximation to $f_{\text{gap},\Delta}(x)$.
\begin{lemma}[Polynomial approximation to the gapped linear function $f_{\text{gap},\Delta}(x)$] \label{Lem.Polynomial_gapped_linear} $\forall\;\epsilon \le \mathcal{O}(1)$, there exists an odd polynomial $p_{\text{gap},\Delta,n}$ of degree $n=\mathcal{O}(\Delta^{-1/2}\log^{3/2}{(1/(\Delta\epsilon))})$ such that \begin{align} \max_{ x \in [-1,-1+\Delta]} \left|p_{\text{gap},\Delta,n}(x)-\frac{x+1-\Delta}{\Delta}\right|\le \epsilon \quad \text{and}\quad \max_{ x \in [-1,1]} \left|p_{\text{gap},\Delta,n}(x)\right|\le 1. \end{align} \end{lemma} \begin{proof} Let us expand $f_{\text{gap},\Delta,\epsilon_1}(x)=\sum^\infty_{j=0}a_j T_j(x)$ in the Chebyshev basis. Then the truncation error of $p_{n}(x)=\sum^n_{j=0}a_j T_j(x)$ has a well-known upper bound from Thm.~8.2 of~\cite{Trefethen2013approximation}: \begin{align} \label{Eq:Polynomial_Gapped_linear_error} \max_{x\in[-1,1]}|p_{n}(x)-f_{\text{gap},\Delta,\epsilon_1}(x)|\le \epsilon_2 = \frac{2M \rho^{-n}}{\rho-1}, \quad M = \max_{z\in E_\rho}|f_{\text{gap},\Delta,\epsilon_1}(z)|, \end{align} for any elliptical radius $\rho>1$, where $E_\rho=\{z:z=\frac{1}{2}(\rho e^{i\theta}+\rho^{-1} e^{-i\theta}), \theta\in[0,2\pi)\}$ is the Bernstein ellipse. We will need an upper bound on $|\text{erf}(r e^{i\phi})|$ for $r\ge 0, \phi\in[0,2\pi)$: \begin{align} \label{Eq:Erfc_UpperBound} |\text{erf}(r e^{i\phi})| &= \frac{2}{\sqrt{\pi}}\left|\int_0^r e^{- t^2 e^{2i\phi}} dt\right| = \frac{2}{\sqrt{\pi}}\left|\int_0^r e^{- t^2 \cos{(2\phi)}}e^{- i t^2 \sin{(2\phi)}} dt\right| \\ \nonumber & \le \frac{2}{\sqrt{\pi}}\left|\int_0^r e^{-t^2 \cos{(2\phi)}} dt\right| \le \frac{2r}{\sqrt{\pi}}\max\{1,e^{-r^2 \cos{(2\phi)}}\} = \frac{2r}{\sqrt{\pi}}\max\{1,e^{\text{Re}(-(re^{i\phi})^2)}\}. \end{align} We also need the upper bound $|z|^2=\frac{1}{4}\left(\rho^{2}+\rho^{-2}+2\cos{(2\theta)}\right)\le \rho^2$. Let $k=\frac{\sqrt{2}}{\Delta}\log^{1/2}{(\frac{1}{2\pi\epsilon_1^2})}$, so that $|k(z+1-3\Delta/2)|\le k(|z|+1+3\Delta/2)\le k(\rho+1+3\Delta/2)$.
Then \begin{align} M &= \max_{z\in E_\rho}\left|\frac{z+1-\Delta}{\Delta}\frac{1-\text{erf}{(k(z+1-3\Delta/2))}}{2}\right| \le \max_{z\in E_\rho}\frac{|z|+1+\Delta}{2\Delta}\left(1+|\text{erf}{(k(z+1-3\Delta/2))}|\right) \\\nonumber &\le \mathcal{O}(\text{poly}(\rho,\Delta^{-1}))\max_{z\in E_\rho}\left(1+\frac{2|k(z+1-3\Delta/2)|}{\sqrt{\pi}}(1+e^{\text{Re}(-(k(z+1-3\Delta/2))^2)})\right) \\\nonumber &\le \mathcal{O}(\text{poly}(\rho,\Delta^{-1}))\max_{z\in E_\rho}e^{\text{Re}(-(k(z+1-3\Delta/2))^2)}. \end{align} By taking derivatives with respect to $\theta$, the maximum value of the exponent is $\alpha=\max_{\theta\in[0,2\pi)}\text{Re}(-(k(z+1-3\Delta/2))^2) = \frac{k^2(\rho^2-1)(2-(2-3\Delta)^2\rho^2+2\rho^4)}{8\rho^2(1+\rho^4)}$. Let us choose $\rho = e^{a}$, where $a=\mathcal{O}(1/\sqrt{k^2\Delta})$. Then $\alpha = \mathcal{O}(1)$. Substituting the value of $k$, we have $a=\mathcal{O}(\sqrt{\Delta/\log{(1/\epsilon_1)}})$, and $M = \mathcal{O}\left(\text{poly}(\Delta^{-1})\right)$. Thus from Eq.~\ref{Eq:Polynomial_Gapped_linear_error}, \begin{align} \epsilon_2=\mathcal{O}\left(\text{poly}(\Delta^{-1})e^{-n \sqrt{\Delta/\log{(1/\epsilon_1)}}}\right) \Rightarrow n = \mathcal{O}\left(\Delta^{-1/2}\log^{3/2}{\left(\frac{1}{\max\{\Delta\epsilon_1,\epsilon_2\}}\right)}\right), \end{align} where the last equality uses $\log{(\text{poly}(\Delta^{-1})/\epsilon)}=\mathcal{O}(\log(\frac{1}{\Delta\epsilon}))$. Thus the total approximation error is $\max_{x\in[-1,-1+\Delta]}|p_{n}(x)-\frac{x+1-\Delta}{\Delta}|\le \epsilon_1+\epsilon_2$. Let $p_{\text{gap,sym},\Delta,n}(x)=\frac{1}{2}(p_{n}(x)-p_{n}(-x))$ be the odd component of $p_{n}(x)$. Using the bounds of Lem.~\ref{Lem.Entire_gapped_lin}, this increases the error in $x\in[-1,-1+\Delta]$ to at most $\frac{11}{10}(\epsilon_1+\epsilon_2)$. By subtracting these bounds, we also have $\max_{x\in[-1,1]}|p_{\text{gap,sym},\Delta,n}(x)|\le 1+\frac{11}{10}(\epsilon_1+\epsilon_2)$.
Thus we rescale this to obtain $p_{\text{gap},\Delta,n}(x)=\frac{p_{\text{gap,sym},\Delta,n}(x)}{1+\frac{11}{10}(\epsilon_1+\epsilon_2)}$. Using the fact that $|\frac{1}{1+x}-1|\le x$ for all $x\ge 0$, this increases the error by at most a constant factor, $\max_{x\in[-1,-1+\Delta]}|p_{\text{gap},\Delta,n}(x)-\frac{x+1-\Delta}{\Delta}|=\mathcal{O}(\epsilon_1+\epsilon_2)$, so we choose $\epsilon_1=\epsilon_2=\mathcal{O}(\epsilon)$. \end{proof} \end{document}
arXiv
C-BASS Experimental radio cosmology MID-Radio Telescope, single pixel feed packages for the square kilometre array: an overview IEEE Journal of Microwaves Institute of Electrical and Electronics Engineers 1:1 (2021) 428-437 Angela Taylor, Michael Jones, Jamie Leech, andre Hector, Lei Liu, Robert Watkins, A Pellegrini The Square Kilometre Array (SKA) project is an international effort to build the world's largest radio telescope, enabling science with unprecedented detail and survey speed. The project spans over a decade and is now at a mature stage, ready to enter the construction and integration phase. In the fully deployed state, the MID-Telescope consists of a 150-km diameter array of offset Gregorian antennas installed in the radio quiet zone of the Karoo desert (South Africa). Each antenna is equipped with three feed packages, that are precision positioned in the sub-reflector focus by a feed indexer platform. The total observational bandwidth (0.35-15.4GHz) is segmented into seven bands. Band 1 (0.35 – 1.05 GHz) and Band 2 (0.95 – 1.76 GHz) are implemented as individual feed packages. The remaining five bands (Bands 3, 4, 5a, 5b, and 6) are combined in a single feed package. Initially only Band 5a (4.6 – 8.5 GHz) and Band 5b (8.3 – 15.4 GHz) will be installed. This paper provides an overview of recent progress on design, test and integration of each feed package as well as project and science goals, timeline and path to construction. More details from the publisher Details from ORA IEEE Journal of Microwaves Institute of Electrical and Electronics Engineers (2021) Alice Pellegrini, Jonas Flygare, Isak P Theron, Robert Lehmensiek, Adriaan Peens-Hough, Jamie Leech, Michael E Jones, Angela C Taylor, Robert EJ Watkins, Lei Liu, Andre Hector, Biao Du, Yang Wu The Square Kilometre Array (SKA) project is an international effort to build the world s largest radio telescope, enabling science with unprecedented detail and survey speed. 
The project spans over a decade and is now at a mature stage, ready to enter the construction and integration phase. In the fully deployed state, the MID-Telescope consists of a 150-km diameter array of offset Gregorian antennas installed in the radio quiet zone of the Karoo desert (South Africa). Each antenna is equipped with three feed packages, that are precision positioned in the sub-reflector focus by a feed indexer platform. The total observational bandwidth (0.35-15.4GHz) is segmented into seven bands. Band 1 (0.35-1.05GHz) and Band 2 (0.95-1.76GHz) are implemented as individual feed packages. The remaining five bands (Bands 3, 4, 5a, 5b, and 6) are combined in a single feed package. Initially only Band 5a (4.6-8.5GHz) and Band 5b (8.3-15.4GHz) will be installed. This paper provides an overview of recent progress on design, test and integration of each feed package as well as project and science goals, timeline and path to construction. Details from ArXiV Characterizing the performance of high-speed data converters for RFSoC-based radio astronomy receivers Monthly Notices of the Royal Astronomical Society Oxford University Press 501:4 (2020) 5096-5104 Chao Liu, Michael Jones, Angela Taylor RF system-on-chip (RFSoC) devices provide the potential for implementing a complete radio astronomy receiver on a single board, but performance of the integrated analogue-to-digital converters (ADCs) is critical. We have evaluated the performance of the data converters in the Xilinx ZU28DR RFSoC, which are 12-bit, 8-fold interleaved converters with a maximum sample speed of 4.096 Giga-sample per second (GSPS). We measured the spurious-free dynamic range (SFDR), signal-to-noise and distortion (SINAD), effective number of bits (ENOB), intermodulation distortion (IMD), and cross-talk between adjacent channels over the bandwidth of 2.048 GHz. 
We captured data for off-line analysis with floating-point arithmetic and also implemented a real-time integer-arithmetic spectrometer on the RFSoC. The performance of the ADCs is sufficient for radio astronomy applications and close to the vendor specifications in most scenarios. We have carried out spectral integrations of up to 100 s and stability tests over tens of hours and find thermal noise-limited performance over these time-scales.

Resolved observations at 31 GHz of spinning dust emissivity variations in rho Oph
Monthly Notices of the Royal Astronomical Society 495:3 (2020) 3482-3493
Carla Arce-Tord, Matias Vidal, Simon Casassus, Miguel Carcamo, Clive Dickinson, Brandon S Hensley, Ricardo Genova-Santos, J Richard Bond, Michael E Jones, Anthony CS Readhead, Angela C Taylor, J Anton Zensus
The ρ Oph molecular cloud is one of the best examples of spinning dust emission, first detected by the cosmic background imager (CBI). Here, we present 4.5 arcmin observations with CBI 2 that confirm 31 GHz emission from ρ Oph W, the PDR exposed to the B-type star HD 147889, and highlight the absence of signal from S1, the brightest IR nebula in the complex. In order to quantify an association with dust-related emission mechanisms, we calculated correlations at different angular resolutions between the 31 GHz map and proxies for the column density of IR emitters, dust radiance, and optical depth templates. We found that the 31 GHz emission correlates best with the PAH column density tracers, while the correlation with the dust radiance improves when considering emission that is more extended (from the shorter baselines), suggesting that the angular resolution of the observations affects the correlation results.
A proxy for the spinning dust emissivity reveals large variations within the complex, with a dynamic range of 25 at 3σ and a variation by a factor of at least 23, at 3σ, between the peak in ρ Oph W and the location of S1, which means that environmental factors are responsible for boosting spinning dust emissivities locally.

The C-Band All-Sky Survey (C-BASS): total intensity point source detection over the northern sky
Monthly Notices of the Royal Astronomical Society, Oxford University Press (2020) staa1572
RDP Grumitt, Angela Taylor, Luke Jew, Michael E Jones, C Dickinson, A Barr, R Cepeda-Arroita, HC Chiang, SE Harper, HM Heilgendorff, JL Jonas, JP Leahy, Jamie Leech, TJ Pearson, MW Peel, ACS Readhead, J Sievers
We present a point source detection algorithm that employs the second order Spherical Mexican Hat Wavelet filter (SMHW2), and use it on C-BASS northern intensity data to produce a catalogue of point sources. The SMHW2 allows us to filter the entire sky at once, avoiding complications from edge effects arising when filtering small sky patches. The algorithm is validated against a set of Monte Carlo simulations, consisting of diffuse emission, instrumental noise, and various point source populations. The simulated source populations are successfully recovered. The SMHW2 detection algorithm is used to produce a $4.76\,\mathrm{GHz}$ northern sky source catalogue in total intensity, containing 1729 sources and covering declinations $\delta\geq-10^{\circ}$. The C-BASS catalogue is matched with the GB6 and PMN catalogues over their common declinations. From this we estimate the $90\%$ completeness level to be approximately $630\,\mathrm{mJy}$, with a corresponding reliability of $95\%$, when applying a Galactic mask covering $20\%$ of the sky. We find the C-BASS and GB6/PMN flux density scales to be consistent with one another to within $3\%$.
The absolute positional offsets of C-BASS sources from matched GB6/PMN sources peak at approximately $3.5\,\mathrm{arcmin}$.
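The ADC figures of merit quoted in the RFSoC paper above (SINAD, ENOB) are linked by a standard relation, ENOB = (SINAD − 1.76 dB) / 6.02; a small illustrative helper (the function name is ours, not from the paper):

```python
def enob(sinad_db: float) -> float:
    """Effective number of bits implied by a measured SINAD (in dB).

    Uses the standard relation SINAD = 6.02 * ENOB + 1.76 dB
    for a full-scale sine-wave input.
    """
    return (sinad_db - 1.76) / 6.02

# An ideal 12-bit converter (the ZU28DR's nominal resolution) gives
# SINAD = 6.02 * 12 + 1.76 = 74.0 dB; measured SINAD is always lower.
print(round(enob(74.0), 6))  # → 12.0
```

Any measured SINAD below the ideal 74.0 dB thus translates directly into an effective resolution below 12 bits.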
May 2011, 30(2): 559-571. doi: 10.3934/dcds.2011.30.559

Regularity of optimal transport and cut locus: From nonsmooth analysis to geometry to smooth analysis
Cédric Villani, Institut Henri Poincaré & Université Claude Bernard Lyon 1, 11 rue Pierre et Marie Curie, 75230 Paris Cedex 05
Received August 2010; Published February 2011

In this survey paper I describe the convoluted links between the regularity theory of optimal transport and the geometry of the cut locus.

Keywords: fully nonlinear partial differential equations, Ma-Trudinger-Wang curvature, optimal transport, cut locus.
Mathematics Subject Classification: 35J60, 53A0.
Citation: Cédric Villani. Regularity of optimal transport and cut locus: From nonsmooth analysis to geometry to smooth analysis. Discrete & Continuous Dynamical Systems - A, 2011, 30 (2) : 559-571. doi: 10.3934/dcds.2011.30.559
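For readers coming from outside the field, the objects of this survey can be stated compactly. The following is a standard textbook formulation, not taken verbatim from the paper: optimal transport with quadratic cost on a Riemannian manifold $M$ seeks a minimizer of

```latex
% Monge problem with quadratic cost c(x,y) = d(x,y)^2 / 2,
% minimized over maps T pushing the measure mu forward to nu
\inf_{T_{\#}\mu = \nu} \int_M \frac{d(x, T(x))^2}{2} \, d\mu(x)
```

and the Ma-Trudinger-Wang condition, central to the regularity theory discussed, asks that the associated cost-curvature tensor $\mathfrak{S}_c$ satisfy $\mathfrak{S}_c(x,y)(\xi,\eta) \ge 0$ whenever $\xi \perp \eta$.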
Multitree In combinatorics and order theory, a multitree may describe either of two equivalent structures: a directed acyclic graph (DAG) in which there is at most one directed path between any two vertices, or equivalently in which the subgraph reachable from any vertex induces an undirected tree, or a partially ordered set (poset) that does not have four items a, b, c, and d forming a diamond suborder with a ≤ b ≤ d and a ≤ c ≤ d but with b and c incomparable to each other (also called a diamond-free poset[1]). In computational complexity theory, multitrees have also been called strongly unambiguous graphs or mangroves; they can be used to model nondeterministic algorithms in which there is at most one computational path connecting any two states.[2] Multitrees may be used to represent multiple overlapping taxonomies over the same ground set.[3] If a family tree may contain multiple marriages from one family to another, but does not contain marriages between any two blood relatives, then it forms a multitree.[4] Equivalence between DAG and poset definitions In a directed acyclic graph, if there is at most one directed path between any two vertices, or equivalently if the subgraph reachable from any vertex induces an undirected tree, then its reachability relation is a diamond-free partial order. Conversely, in a diamond-free partial order, the transitive reduction identifies a directed acyclic graph in which the subgraph reachable from any vertex induces an undirected tree. Diamond-free families A diamond-free family of sets is a family F of sets whose inclusion ordering forms a diamond-free poset. 
If D(n) denotes the largest possible diamond-free family of subsets of an n-element set, then it is known that $2\leq \lim _{n\to \infty }D(n){\Big /}{\binom {n}{\lfloor n/2\rfloor }}\leq 2{\frac {3}{11}}$, and it is conjectured that the limit is 2.[1] Related structures A polytree, a directed acyclic graph formed by orienting the edges of an undirected tree, is a special case of a multitree. The subgraph reachable from any vertex in a multitree is an arborescence rooted in the vertex, that is a polytree in which all edges are oriented away from the root. The word "multitree" has also been used to refer to a series–parallel partial order,[5] or to other structures formed by combining multiple trees. References 1. Griggs, Jerrold R.; Li, Wei-Tian; Lu, Linyuan (2010), Diamond-free families, arXiv:1010.5311, Bibcode:2010arXiv1010.5311G. 2. Allender, Eric; Lange, Klaus-Jörn (1996), "StUSPACE(log n) ⊆ DSPACE(log2 n/log log n)", Algorithms and Computation, 7th International Symposium, ISAAC '96, Osaka, Japan, December 16–18, 1996, Proceedings, Lecture Notes in Computer Science, vol. 1178, Springer-Verlag, pp. 193–202, doi:10.1007/BFb0009495. 3. Furnas, George W.; Zacks, Jeff (1994), "Multitrees: enriching and reusing hierarchical structure", Proc. SIGCHI conference on Human Factors in Computing Systems (CHI '94), pp. 330–336, doi:10.1145/191666.191778, S2CID 18710118. 4. McGuffin, Michael J.; Balakrishnan, Ravin (2005), "Interactive visualization of genealogical graphs", IEEE Symposium on Information Visualization, Los Alamitos, California, US: IEEE Computer Society, p. 3, doi:10.1109/INFOVIS.2005.22, S2CID 15449409. 5. Jung, H. A. (1978), "On a class of posets and the corresponding comparability graphs", Journal of Combinatorial Theory, Series B, 24 (2): 125–133, doi:10.1016/0095-8956(78)90013-8, MR 0491356.
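The path-uniqueness characterization above translates directly into a check by path counting. A sketch (illustrative code, assuming the input graph is already acyclic; `is_multitree` is our name for the helper, not standard terminology):

```python
from collections import defaultdict

def is_multitree(edges):
    """Return True if the DAG given by `edges` (a list of (u, v) pairs)
    has at most one directed path between every ordered pair of vertices,
    i.e. it is a multitree. Assumes the input is acyclic."""
    adj = defaultdict(list)
    nodes = set()
    for u, v in edges:
        adj[u].append(v)
        nodes.update((u, v))

    memo = {}

    def counts_from(x):
        # Number of distinct directed paths from x to each reachable vertex.
        if x in memo:
            return memo[x]
        c = defaultdict(int)
        for y in adj[x]:
            c[y] += 1                       # the edge x -> y itself
            for t, k in counts_from(y).items():
                c[t] += k                   # paths x -> y -> ... -> t
        memo[x] = c
        return c

    return all(k <= 1 for u in nodes for k in counts_from(u).values())

# A polytree is a multitree; a diamond (two paths from a to d) is not.
assert is_multitree([("a", "b"), ("a", "c"), ("b", "d")])
assert not is_multitree([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
```

On the diamond a ≤ b ≤ d, a ≤ c ≤ d, the count for the pair (a, d) is 2 — exactly the forbidden suborder from the poset definition.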
Gravitation NCERT solutions

Question 8.1
Answer the following:
(a) You can shield a charge from electrical forces by putting it inside a hollow conductor. Can you shield a body from the gravitational influence of nearby matter by putting it inside a hollow sphere or by some other means?
(b) An astronaut inside a small spaceship orbiting around the Earth cannot detect gravity. If the space station orbiting around the Earth has a large size, can he hope to detect gravity?
(c) If you compare the gravitational force on the Earth due to the Sun to that due to the Moon, you would find that the Sun's pull is greater than the Moon's pull. (You can check this yourself using the data available in the succeeding exercises.) However, the tidal effect of the Moon's pull is greater than the tidal effect of the Sun. Why?

(a) No. It is not possible to shield a body from the gravitational influence of nearby matter, because gravitational forces, unlike electrical forces, are independent of the nature of the material medium in which the bodies are placed.
(b) Yes. If the size of the spaceship orbiting around the earth is large, then the gravitational effect of the spaceship itself may become measurable.
(c) The tidal effect varies inversely as the cube of the distance, whereas the gravitational force varies inversely as the square of the distance. As the moon is much closer to the earth than the sun, its tidal effect is greater than that of the sun.

Choose the correct alternative:
(a) Acceleration due to gravity increases/decreases with increasing altitude.
(b) Acceleration due to gravity increases/decreases with increasing depth (assume the Earth to be a sphere of uniform density).
(c) Acceleration due to gravity is independent of the mass of the Earth/mass of the body.
(d) The formula $-GMm\left( \frac{1}{r_2}-\frac{1}{r_1} \right)$ is more/less accurate than the formula $mg(r_2-r_1)$ for the difference of potential energy between two points $r_2$ and $r_1$ distance away from the centre of the Earth.

(a) Acceleration due to gravity at an altitude $h$ (with $h \ll R$) is given by $g_h=g\left( 1-\frac{2h}{R} \right)$. So acceleration due to gravity decreases with increasing altitude.
(b) Acceleration due to gravity at a depth $d$ is given by $g_d=g\left( 1-\frac{d}{R} \right)$. Thus acceleration due to gravity decreases with increasing depth.
(c) Acceleration due to gravity is given by $g=\frac{GM}{R^2}$, where $M$ is the mass of the earth. It is clear that the acceleration due to gravity is independent of the mass $m$ of the body.
(d) The formula $-GMm\left( \frac{1}{r_2}-\frac{1}{r_1} \right)$ is more accurate than the formula $mg(r_2-r_1)$, since the value of $g$ varies with distance from the centre of the earth, while the second formula treats it as constant.

Suppose there existed a planet that went around the sun twice as fast as the earth. What would be its orbital size as compared to that of the earth?

Let $T_e$ and $T_p$ be the periods of revolution of the earth and of the planet. Since the planet goes around the sun twice as fast as the earth,
$T_p=\frac{1}{2}T_e$
Orbital size of the earth, $r_e = 1 \ AU$; orbital size of the planet, $r_p = ?$
Now, from Kepler's third law of planetary motion,
$\frac{T_p^2}{T_e^2}=\frac{r_p^3}{r_e^3}$
$r_p=\left( \frac{T_p}{T_e} \right)^{2/3} r_e=\left( \frac{1}{2} \right)^{2/3}\times 1 \ AU$
$r_p=(0.5)^{2/3} \ AU = 0.63 \ AU$

Io, one of the satellites of Jupiter, has an orbital period of 1.769 days and the radius of the orbit is $4.22 \times 10^8 \ m$. Show that the mass of Jupiter is about one-thousandth that of the sun. (Take 1 year = 365.25 mean solar days.)

For the satellite of Jupiter, $T_J = 1.769 \ days = 1.769 \times 24 \times 60 \times 60 \ s$
Orbital radius of Jupiter's satellite, $R_J = 4.22 \times 10^8 \ m$
Now we know from Kepler's third law that $T_J^2 = \frac{4\pi^2 R_J^3}{G M_J}$
Therefore the mass of Jupiter is given by
$M_J=\frac{4\pi^2 R_J^3}{G T_J^2}=\frac{4\pi^2\times (4.22\times 10^8)^3}{G\,(1.769\times 24\times 60\times 60)^2}$
Orbital period of the earth around the sun, $T=1 \ year = 365.25 \times 24 \times 60 \times 60 \ s$
Orbital radius of the earth, $R=1.496 \times 10^{11} \ m$
$T^2 = \frac{4\pi^2 R^3}{G M_s}$
Therefore the mass of the sun is
$M_s=\frac{4\pi^2 R^3}{G T^2}=\frac{4\pi^2}{G}\,\frac{(1.496\times 10^{11})^3}{(365.25\times 24\times 60\times 60)^2}$
Comparing the mass of Jupiter and the mass of the sun,
$\frac{M_J}{M_S}=\frac{(4.22\times 10^8)^3}{(1.769\times 24\times 60\times 60)^2}\cdot \frac{(365.25\times 24\times 60\times 60)^2}{(1.496\times 10^{11})^3}=\frac{1}{1046}\approx \frac{1}{1000}$
Hence the mass of Jupiter is about one-thousandth that of the sun.
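Both results above follow from the same $T^2 \propto R^3$ proportionality, and are easy to cross-check numerically (using only the values quoted in the problems):

```python
# Planet with half the earth's period: r_p = (T_p/T_e)^(2/3) = (1/2)^(2/3) AU
r_p = 0.5 ** (2 / 3)
assert abs(r_p - 0.63) < 0.005          # ≈ 0.63 AU, as derived above

# Kepler's third law gives M ∝ R^3 / T^2, so the mass ratio needs no value of G
R_io, T_io = 4.22e8, 1.769 * 86400      # Io's orbit around Jupiter (m, s)
R_e, T_e = 1.496e11, 365.25 * 86400     # earth's orbit around the sun (m, s)
ratio = (R_io ** 3 / T_io ** 2) / (R_e ** 3 / T_e ** 2)
assert 1000 < 1 / ratio < 1100          # ≈ 1/1046, i.e. about one-thousandth
```

Because only the ratio $R^3/T^2$ enters, the gravitational constant cancels entirely in the comparison.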
Let us consider that our galaxy consists of $2.5 \times 10^{11}$ stars, each of one solar mass. How long will a star at a distance of 50,000 light years from the galactic centre take to complete one revolution? Take the diameter of the Milky Way to be $10^5$ light years and $G = 6.67 \times 10^{-11} \ Nm^2/kg^2$.

Given 1 solar mass $= 2 \times 10^{30} \ kg$
Mass of the galaxy, $M = 2.5 \times 10^{11} \ \text{solar masses} = 2.5 \times 10^{11} \times 2 \times 10^{30} \ kg = 5 \times 10^{41} \ kg$
Orbital radius of the star, $R = 50000 \ ly = 50000 \times 9.46 \times 10^{15} \ m = 4.73 \times 10^{20} \ m$
Now, from Kepler's third law, $T^2=\frac{4\pi^2 R^3}{GM}$
Putting the given values in the above equation,
$T^2=\frac{4\times 9.87\times (4.73\times 10^{20})^3}{6.67\times 10^{-11}\times 5\times 10^{41}}=1.25\times 10^{32} \ s^2$
$\therefore T=1.12\times 10^{16} \ s=\frac{1.12\times 10^{16}}{365.25\times 24\times 60\times 60} \ years=3.54\times 10^8 \ years$

(a) If the zero of potential energy is at infinity, the total energy of an orbiting satellite is negative of its kinetic/potential energy.
(b) The energy required to launch an orbiting satellite out of Earth's gravitational influence is more/less than the energy required to project a stationary object at the same height (as the satellite) out of Earth's influence.

(a) If the zero of potential energy is at infinity, the total energy of an orbiting satellite is negative of its kinetic energy.
(b) The energy required to launch an orbiting satellite out of Earth's gravitational influence is less than the energy required to project a stationary object at the same height (as the satellite) out of Earth's influence.

Does the escape speed of a body from the Earth depend on (a) the mass of the body, (b) the location from where it is projected, (c) the direction of projection, (d) the height of the location from where the body is launched?
The escape velocity is given by the formula
$v_e=\sqrt{\frac{2GM}{r}}$
It can be easily observed from the formula that:
(a) It is independent of the mass of the body.
(b) It does not depend on the location from where a body is projected.
(c) It does not depend on the direction of projection of a body.
(d) The escape velocity depends upon the gravitational potential at the point from which the body is projected, and that potential depends on height. So the escape velocity does depend on the height of the location from where the body is projected.

A comet orbits the Sun in a highly elliptical orbit. Does the comet have a constant (a) linear speed (b) angular speed (c) angular momentum (d) kinetic energy (e) potential energy (f) total energy throughout its orbit? Neglect any mass loss of the comet when it comes very close to the Sun.

(a) The linear speed of the comet keeps on changing, in accordance with Kepler's second law of planetary motion. When the comet is near the sun, its speed is higher; when the comet is far away from the sun, its speed is much lower.
(b) The angular speed also varies.
(c) The comet has constant angular momentum.
(d) Kinetic energy does not remain constant, as the linear speed varies.
(e) Potential energy varies along the path, as the distance of the comet from the sun changes continuously.
(f) Total energy remains constant throughout the orbit.

Which of the following symptoms is likely to afflict an astronaut in space: (a) swollen feet (b) swollen face (c) headache (d) orientation problem?

(a) No. The legs carry the weight of the body in the normal position due to the pull of gravity. The astronaut in space is in a state of weightlessness, so swollen feet will not afflict him or affect his working.
(b) Yes. An astronaut may develop a swollen face because of the weightless state. A swollen face may affect to a great extent the seeing/hearing/smelling/eating capabilities of the astronaut in space.
(c) Yes. Headache is due to mental strain.
It will persist whether a person is an astronaut in space or on earth, so a headache will have the same effect on the astronaut in space as on a person on earth.
(d) Yes. Space also has orientation, and we have frames of reference in space. Hence, the orientation problem will affect the astronaut in space.

Question 8.10
In the following two exercises, choose the correct answer from among the given ones. The gravitational intensity at the centre of a hemispherical shell of uniform mass density has the direction indicated by the arrow (see Fig.) (i) a, (ii) b, (iii) c, (iv) 0.

To solve this problem, let us consider a complete spherical shell. We know that the gravitational potential $V$ is constant at all points inside a spherical shell, so the potential gradient $\frac{dV}{dr}$ is zero at all points inside it. Since the gravitational intensity is the negative of the potential gradient, $E=-\frac{dV}{dr}$, the gravitational intensity is zero at all points inside a hollow spherical shell. This means that the gravitational forces acting on a particle at any point inside the complete shell are symmetrically placed and cancel. If we remove the upper half of the complete spherical shell, we get the hemispherical shell. By symmetry, the horizontal components of the intensity at the centre C cancel, so the net intensity must point vertically downwards, as shown by arrow c. Hence option (iii) is correct.

For the above problem, the direction of the gravitational intensity at an arbitrary point P is indicated by the arrow (i) d, (ii) e, (iii) f, (iv) g.

Using the reasoning given in the solution of the previous problem, the direction of the gravitational field intensity at P will be along e. So, option (ii) is correct.

A rocket is fired from the earth towards the sun. At what point on its path is the gravitational force on the rocket zero? Mass of sun = $2 \times 10^{30} \ kg$, mass of earth = $6 \times 10^{24} \ kg$.
Neglect the effect of the other planets. Orbital radius of the earth = $1.5 \times 10^{11} \ m$.

$M_s=2 \times 10^{30} \ kg$, $M_e=6 \times 10^{24} \ kg$, $r=1.5 \times 10^{11} \ m$
Let $m$ be the mass of the rocket, and let the gravitational forces on the rocket due to the sun and the earth be equal and opposite at a distance $x$ from the earth. The distance of the rocket from the sun is then $r-x$.
Gravitational pull of the earth on the rocket = gravitational pull of the sun on the rocket:
$\frac{GM_S m}{(r-x)^2}=\frac{GM_E m}{x^2}\Rightarrow \frac{(r-x)^2}{x^2}=\frac{M_S}{M_E}$
$\frac{r-x}{x}=\sqrt{\frac{M_S}{M_E}}=\sqrt{\frac{2\times 10^{30}}{6\times 10^{24}}}=\frac{10^3}{\sqrt{3}}$
$\frac{r}{x}-1=\frac{10^3}{\sqrt{3}}$
Solving for $x$, we get
$x=\frac{\sqrt{3}\,r}{10^3+\sqrt{3}}=\frac{1.732\times 1.5\times 10^{11}}{10^3+1.732}=2.59\times 10^8 \ m$

How will you 'weigh the sun', that is, estimate its mass? The mean orbital radius of the earth around the sun is $1.5 \times 10^8$ km.

The mean orbital radius of the Earth around the Sun, $R = 1.5 \times 10^8 \ km = 1.5 \times 10^{11} \ m$
Time period, $T = 365.25 \times 24 \times 60 \times 60 \ s$
Let the mass of the Sun be $M$ and that of the Earth be $m$.
According to the law of gravitation, $F=\frac{GMm}{R^2}$
Centripetal force, $F=\frac{mv^2}{R}=mR\omega^2$
Equating the two forces, $\frac{GMm}{R^2}=mR\omega^2$, and since $\omega=\frac{2\pi}{T}$,
$M=\frac{4\pi^2 R^3}{GT^2}$
$M=\frac{4\times (3.14)^2\times (1.5\times 10^{11})^3}{6.67\times 10^{-11}\times (365.25\times 24\times 60\times 60)^2}$
After calculating, we get $M=2.0\times 10^{30} \ kg$

A Saturn year is 29.5 times the earth year. How far is Saturn from the Sun if the Earth is $1.5 \times 10^8$ km away from the sun?
$T_s = 29.5\,T_e$, $R_e = 1.5 \times 10^8$ km
From Kepler's third law, $\frac{T_s^2}{T_e^2}=\frac{R_s^3}{R_e^3}$
$R_s=R_e\left( \frac{T_s}{T_e} \right)^{2/3}=1.5\times 10^8\times (29.5)^{2/3} \ km$
$R_s=1.43\times 10^9 \ km$

A body weighs 63 N on the surface of the earth. What is the gravitational force on it due to the earth at a height equal to half the radius of the earth?

Weight of the body, $mg = 63 \ N$; height $h=\frac{R}{2}$
Acceleration due to gravity at a height $h$ is given by $g_h=g\left( \frac{R}{R+h} \right)^2$
$\frac{g_h}{g}=\left( \frac{R}{R+\frac{R}{2}} \right)^2=\left( \frac{2}{3} \right)^2=\frac{4}{9}$
$g_h=\frac{4}{9}g$, so $mg_h=\frac{4}{9}mg=\frac{4}{9}\times 63=28 \ N$

Assuming the earth to be a sphere of uniform mass density, how much would a body weigh half way down to the centre of the earth if it weighed 250 N on the surface? [g on the surface of the earth = $9.8 \ m/s^2$]

Weight $= mg = 250 \ N$; depth $d=\frac{R}{2}$
Acceleration due to gravity at a depth $d$ below the surface of the earth is $g_d=g\left( 1-\frac{d}{R} \right)$
Substituting the values, $g_d=g\left( 1-\frac{R/2}{R} \right)=\frac{g}{2}$
Hence, new weight $= mg_d=\frac{mg}{2}=\frac{250}{2}=125 \ N$

A rocket is fired vertically with a speed of 5 km/s from the earth's surface. How far from the earth does the rocket go before returning to the earth? Mass of the earth = $6 \times 10^{24} \ kg$, mean radius of the earth = $6.4 \times 10^6 \ m$, $G = 6.67 \times 10^{-11} \ Nm^2/kg^2$

Let $v$ be the velocity of the rocket when it is fired from the earth, and $h$ the height at which its velocity becomes zero.
Total energy of the rocket at the surface of the earth:
$E_1=K.E.+P.E.=\frac{1}{2}mv^2+\left( -\frac{GMm}{R} \right)$
At the highest point, $K.E.=0$ and $P.E.=-\frac{GMm}{R+h}$, so
$E_2=-\frac{GMm}{R+h}$
Using the law of conservation of energy, $E_1=E_2$:
$\frac{1}{2}mv^2-\frac{GMm}{R}=-\frac{GMm}{R+h}$
$\frac{1}{2}v^2=GM\left( \frac{1}{R}-\frac{1}{R+h} \right)=GM\,\frac{h}{R(R+h)}$
Now we know that $g=\frac{GM}{R^2}$, so
$\frac{1}{2}v^2=\frac{gR^2 h}{R(R+h)}=\frac{gRh}{R+h}$
$v^2(R+h)=2gRh$
On rearranging the above equation, we find
$h=\frac{v^2 R}{2gR-v^2}$
Substituting all the values, we get
$h=\frac{(5\times 10^3)^2\times 6.4\times 10^6}{(2\times 9.8\times 6.4\times 10^6)-(5\times 10^3)^2}=1.6\times 10^6 \ m$

The escape velocity of a projectile on the surface of the earth is 11.2 km/s. If a body is projected out with thrice this speed, find the speed of the body far away from the earth. Ignore the presence of other planets and the sun.

Given that the escape velocity is $v_e = 11.2 \ km/s$ and the velocity of projection is $v = 3v_e$.
Let $m$ and $v_0$ be the mass and the velocity of the projectile far away from the earth. Using the law of conservation of energy,
$\frac{1}{2}mv_0^2=\frac{1}{2}mv^2-\frac{1}{2}mv_e^2$
$v_0=\sqrt{v^2-v_e^2}=\sqrt{(3v_e)^2-v_e^2}=\sqrt{8}\,v_e$
Putting in the values, we get $v_0=2\sqrt{2}\times 11.2=31.68 \ km/s$

A satellite orbits the earth at a height of 400 km above the surface. How much energy must be expended to rocket the satellite out of the gravitational influence of the earth?
Mass of the satellite is 200 kg, mass of the earth = $6 \times 10^{24} \ kg$, radius of the earth = $6.4 \times 10^6 \ m$, $G = 6.67 \times 10^{-11} \ Nm^2/kg^2$

Total energy of the satellite in the orbit:
$E=K.E.+P.E.=\frac{1}{2}mv^2-\frac{GMm}{R+h}$
Now the orbital velocity is given by $v=\sqrt{\frac{GM}{R+h}}$
Therefore, the total energy is
$E=\frac{1}{2}\,\frac{GMm}{R+h}-\frac{GMm}{R+h}=-\frac{GMm}{2(R+h)}$
This is the energy with which the satellite is bound to the earth.
Energy expended to rocket the satellite out of the earth's gravitational field = − (total energy of the satellite) = $\frac{GMm}{2(R+h)}$
Substituting the values, we get
$\text{Energy required} =\frac{6.67\times10^{-11} \times 6\times10^{24} \times 200}{2(6.4 \times 10^6 + 4 \times 10^5)}=5.9 \times 10^9 \ J$

Two stars each of 1 solar mass ($= 2 \times 10^{30} \ kg$) are approaching each other for a head-on collision. When they are at a distance of $10^9 \ km$, their speeds are negligible. What is the speed with which they collide? The radius of each star is $10^4 \ km$. Assume the stars to remain undisturbed until they collide. Use the known value of G.

Mass of each star $= 2 \times 10^{30} \ kg$
Initial distance between the two stars $= 10^9 \ km = 10^{12} \ m$
Initial P.E.
of the system when they are $10^{12} \ m$ apart:
$U_i = -\frac{GM^2}{r} = -\frac{GM^2}{10^{12}}$
When the stars are just about to collide, the distance between their centres = twice the radius of each star = $2 \times 10^7 \ m$.
Final potential energy of the stars when they have just collided:
$U_f = -\frac{GM^2}{2R} = -\frac{GM^2}{2\times 10^7}$
Let $v$ be the speed of each star. Then the total K.E. will be given by
$KE = \frac{1}{2}Mv^2 + \frac{1}{2}Mv^2 = Mv^2$
Change in potential energy of the stars:
$U_i - U_f = \frac{-GM^2}{10^{12}} - \left(\frac{-GM^2}{2\times 10^7}\right)$
$U_i - U_f = \frac{GM^2}{2\times 10^7} - \frac{GM^2}{10^{12}} \approx \frac{GM^2}{2\times 10^7}$
since $\frac{GM^2}{2\times 10^7} \gg \frac{GM^2}{10^{12}}$.
By conservation of energy,
$Mv^2 = \frac{GM^2}{2\times 10^7}$
Substituting all the values,
$v = 2.58 \times 10^6 \ m/s$
Two heavy spheres each of mass 100 kg and radius 0.10 m are placed 1.0 m apart on a horizontal table. What is the gravitational force and potential at the mid-point of the line joining the centres of the spheres? Is an object placed at that point in equilibrium? If so, is the equilibrium stable or unstable?
The situation is shown in the figure below. Let $r$ be the distance between the centres.
The gravitational fields at the mid-point P of the line joining the two spheres are equal and opposite, so
$E = -\frac{GM}{(r/2)^2} + \frac{GM}{(r/2)^2} = 0$
Gravitational potential at the mid-point P of the line joining the two spheres:
$V = -\frac{GM}{r/2} - \frac{GM}{r/2} = -\frac{4GM}{r}$
$V = -2.7 \times 10^{-8} \ J/kg$
Since the gravitational field at the mid-point P of the line joining the two spheres is zero, an object placed at that point will be in equilibrium. But the equilibrium is unstable, as any displacement of the object will change the effective force in that direction.
Additional Exercise
As you have learnt in the text, a geostationary satellite orbits the earth at a height of nearly 36,000 km from the surface of the earth. What is the potential due to earth's gravity at the site of this satellite?
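The three numerical answers above (the satellite's binding energy, the collision speed of the stars, and the mid-point potential between the spheres) can be verified with a short script; the values below are the ones assumed in the respective problems.

```python
import math

G = 6.67e-11  # N m^2 / kg^2

# Energy to remove the satellite from earth's influence: GMm / (2(R+h))
M_e, m_sat = 6e24, 200             # kg
R_e, h_orb = 6.4e6, 4e5            # m
E = G * M_e * m_sat / (2 * (R_e + h_orb))
print(f"E = {E:.2e} J")            # ~5.9e9 J

# Collision speed of the stars: M v^2 = G M^2 / (2R), with 2R = 2e7 m
M_star = 2e30                      # kg
v = math.sqrt(G * M_star / 2e7)
print(f"v = {v:.2e} m/s")          # ~2.58e6 m/s

# Potential at the mid-point between the two spheres: V = -4GM/r
M_sph, r = 100, 1.0                # kg, m
V = -4 * G * M_sph / r
print(f"V = {V:.2e} J/kg")         # ~-2.7e-8 J/kg
```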
(Take the potential energy at infinity to be zero.) Mass of the earth = $6.0 \times 10^{24} \ kg$, radius = 6400 km.
Gravitational potential at a height h above the surface of the earth is given by
$V = -\frac{GM}{R+h}$
Substituting all the values, we get
$V = -9.4 \times 10^6 \ J/kg$
A star 2.5 times the mass of the sun and collapsed to a size of 12 km rotates with a speed of 1.2 rev. per second. (Extremely compact stars of this kind are known as neutron stars. Certain stellar objects called pulsars belong to this category.) Will an object placed on its equator remain stuck to its surface due to gravity? (Mass of the sun = $2 \times 10^{30} \ kg$.)
A spaceship is stationed on Mars. How much energy must be expended on the spaceship to launch it out of the solar system? Mass of the spaceship = 1000 kg; mass of the sun = $2 \times 10^{30} \ kg$; mass of Mars = $6.4 \times 10^{23} \ kg$; radius of Mars = 3395 km; radius of the orbit of Mars = $2.28 \times 10^8 \ km$; $G = 6.67 \times 10^{-11} \ Nm^2 kg^{-2}$.
A rocket is fired 'vertically' from the surface of Mars with a speed of 2 km/s. If 20% of its initial energy is lost due to Martian atmospheric resistance, how far will the rocket go from the surface of Mars before returning to it? Mass of Mars = $6.4 \times 10^{23} \ kg$; radius of Mars = 3395 km; $G = 6.67 \times 10^{-11} \ Nm^2 kg^{-2}$.
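The geostationary-satellite potential worked out above can be checked the same way; the values assumed are $M = 6.0\times 10^{24} \ kg$, $R = 6.4\times 10^6 \ m$, $h = 3.6\times 10^7 \ m$.

```python
# Quick check of V = -GM/(R+h) for the geostationary satellite.
G, M = 6.67e-11, 6.0e24
R, h = 6.4e6, 3.6e7
V = -G * M / (R + h)
print(f"V = {V:.2e} J/kg")   # ~-9.4e6 J/kg
```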
\begin{document} \title{Totally geodesic submanifolds in the tangent bundle of a Riemannian 2-manifold.} \begin{abstract} We give a full description of the totally geodesic submanifolds in the tangent bundle of a Riemannian 2-manifold of constant curvature and present a new class of cylinder-type totally geodesic submanifolds in the general case. \\[2ex] {\it Keywords:} Sasaki metric, totally ge\-o\-de\-sic submanifolds in the tangent bundle.\\%[1ex] {\it AMS subject class:} Primary 53B25; Secondary 53B20. \end{abstract} \section*{Introduction} Let $(M^n,g)$ be a Riemannian manifold with metric $g$ and $TM^n$ its tangent bundle. S.~Sasaki \cite{Sk} introduced on $TM^n$ a natural Riemannian metric $Tg$. With respect to this metric, all the fibers are totally geodesic and intrinsically flat submanifolds. M.-S.~Liu \cite{Liu} was probably the first to notice that the base manifold embedded into $TM^n$ by the zero section is totally geodesic as well. Soon afterwards, K.~Sato \cite{St} described geodesics (the totally geodesic submanifolds of dimension 1) in the tangent bundle over space forms. The next step was made by P.~Walczak \cite{Wch}, who tried to find a non-zero section $\xi:M^n\to TM^n$ such that the image $\xi(M^n)$ is a totally geodesic submanifold. He proved that if $\xi$ is of constant length and $\xi(M^n)$ is totally geodesic, then $\xi$ is a parallel vector field. As a consequence, the base manifold should be reducible. The irreducible case has stayed out of consideration up to now. A general conjecture stated by A.~Borisenko claims that, in the irreducible case, the zero vector field is the unique one which generates a totally geodesic submanifold $\xi(M^n)$ or, equivalently, that the base manifold is the unique totally geodesic submanifold of dimension $n$ in $TM^n$ transversal to the fibers. A dimensional restriction is essential.
M.T.K.~Abbassi and the author \cite{Ab-Ym} treated the case of fiber-transversal submanifolds in $TM^n$ of dimension $l<n$ and found some examples of totally geodesic submanifolds of this type. Earlier, this problem had been considered in \cite{Ym1}. It is also worthwhile to mention that in the case of the \textit{tangent sphere bundle} the situation is different. S.~Sasaki \cite{Sk1} described geodesics in the tangent sphere bundle over space forms and P.~Nagy \cite{Ng} described geodesics in the tangent sphere bundle over symmetric spaces. The author has given a full description of totally geodesic vector fields on 2-dimensional manifolds of constant curvature \cite{Ym2} and an example of a totally geodesic unit vector field on positively/negatively curved manifolds of non-constant curvature \cite{Ym3}. A full description of 2-manifolds which admit a totally geodesic unit vector field was given in \cite{Ym4}. In this paper we consider a more general problem concerning the description of all possible totally geodesic submanifolds in the \textit{tangent bundle} of a Riemannian 2-manifold with a sign-preserving curvature. For the spaces of constant curvature this problem was posed by A.~Borisenko in \cite{BY}. In Section \ref{2-dim} we prove the following theorems. \textbf{Theorem 1.}\textit{ Let $M^2$ be a Riemannian manifold of constant curvature $K\ne0$. Suppose that $\tilde F^2\subset TM^2$ is a totally geodesic submanifold. Then locally $\tilde F^2$ is one of the following submanifolds: \begin{itemize} \item[(a)] a single fiber $T_qM^2$; \item[(b)] a cylinder-type surface based on a geodesic $\gamma$ in $M^2$ with elements generated by a parallel unit vector field along $\gamma$; \item[(c)] the base manifold embedded into $TM^2$ by the zero vector field. \end{itemize}} Remark that item (b) of Theorem 1 is a consequence of a more general result. \textbf{Theorem 2.} \textit{Let $M^2$ be a Riemannian manifold of sign-preserving curvature.
Suppose that $\tilde F^2\subset TM^2$ is a totally geodesic submanifold having non-transversal intersection with the fibers. Then locally $\tilde F^2$ is a cylinder-type surface based on a geodesic $\gamma$ in $M^2$ with elements generated by a parallel unit vector field along $\gamma$.} Moreover, a general Riemannian manifold $M^n$ admits this class of totally geodesic surfaces in $TM^n$ (see Proposition \ref{Gener}). In Section \ref{3-dim} we prove the following general result. \textbf{Theorem 3. }\textit{Let $M^2$ be a Riemannian manifold with sign-preserving curvature. Then $TM^2$ does not admit a totally geodesic 3-manifold even locally.} \textbf{Acknowledgement. } The author expresses his thanks to professor E. Boeckx (Leuven, Belgium) for useful remarks in discussing the results. \section{Necessary facts about the Sasaki metric} Let $(M^n,g)$ be an $n$-dimensional Riemannian manifold with metric $g$. Denote by $\big<\cdot\,,\cdot\big>$ the scalar product with respect to $g$. The {\it Sasaki metric} on $TM^n$ is defined by the following scalar product: if $\tilde X,\tilde Y$ are tangent vector fields on $TM^n$, then \begin{equation} \label{Eqn1} \big<\big<\tilde X,\tilde Y\big>\big>:=\big<\pi_* \tilde X, \pi_* \tilde Y\big>+\big<K \tilde X,K \tilde Y\big>, \end{equation} where $\pi_*:TTM^n \to TM^n $ is the differential of the projection $\pi:TM^n \to M^n $ and $K: TTM^n \to TM^n$ is the {\it connection map} \cite{Dmb}. The local representations for $\pi_*$ and $K$ are the following ones. Let $(x^1,\dots ,x^n)$ be a local coordinate system on $M^n$. Denote by $\partial_i:=\partial/\partial x^i $ the natural tangent coordinate frame. Then, at each point $q\in M^n$, any tangent vector $\xi$ can be decomposed as $\xi=\xi^i \,\partial_i|_q$. The set of parameters $\{x^1,\dots ,x^n;\,\xi^1,\dots,\xi^n\}$ forms the natural induced coordinate system in $TM^n$, i.e. 
for a point $Q=(q,\xi )\in TM^n$, with $q\in M^n, \ \ \xi \in T_qM^n$, we have $q=(x^1,\dots ,x^n), \, \xi =\xi ^i\,\partial_i|_q$. The natural frame in $T_{Q}TM^n$ is formed by $$ \tilde\partial_i:=\frac{\partial}{\partial x^i}|_Q, \quad \tilde\partial_{n+i}:=\frac{\partial}{\partial \xi ^i}|_Q $$ and for any $\tilde X\in T_{Q}TM^n$ we have the decomposition $$ \tilde X=\tilde X^i\tilde\partial_i+\tilde X^{n+i}\tilde\partial_{n+i}\ . $$ Now locally, the \textit{horizontal} and \textit{vertical} projections of $\tilde X$ are given by \begin{equation} \label{Eqn2} \begin{array}{l} \pi_* \tilde X|_Q= \tilde X^i\,\partial_i|_q, \\[1ex] K \tilde X|_Q= (\tilde X^{n+i}+ \Gamma^i_{jk}(q)\,\xi^j \tilde X^k)\, \partial_i|_q, \\[1ex] \end{array} \end{equation} where $ \Gamma^i_{jk}$ are the Christoffel symbols of the metric $g$. The inverse operations are called \textit{lifts }. If $ X= X^i\,\partial_i$ is a vector field on $M^n$ then the vector fields on $TM$ given by $$ \begin{array}{l} X^h= X^i\,\tilde\partial_i- \Gamma^i_{jk}\,\xi^j X^k\,\tilde\partial_{n+i},\\[1ex] X^v= X^i\,\tilde\partial_{n+i} \end{array} $$ are called the \textit{horizontal} and \textit{vertical} lifts of $X$ respectively. Remark that for any vector field $ X$ on $M^n$ it holds \begin{equation}\label{Pr} \begin{array}{ll} \pi_* { X}^h= X,& K { X}^h=0, \\[1ex] \pi_* { X}^v=0, & K { X}^v= X. \end{array} \end{equation} There is a natural decomposition $$ T_Q(TM^n)=\mathcal{H}_Q(TM^n)\oplus \mathcal{V}_Q(TM^n), $$ where $\mathcal{H}_Q(TM^n)=\ker K$ is called the \textit{horizontal distribution} and $ \mathcal{V}_Q(TM^n)=\ker \pi_*$ is called the \textit{vertical distribution} on $TM^n$. With respect to the Sasaki metric, these distributions are mutually orthogonal. The vertical distribution is \textit{integrable} and the fibers are precisely its integral submanifolds. The horizontal distribution is \textit{never integrable} except the case of a flat base manifold. 
{\it For any vector fields $ X, Y$ on $M^n$, the covariant derivatives of various combinations of lifts to the point $Q=(q,\xi) \in TM^n$ can be found by the formulas} \cite{Kow} \begin{equation}\label{Kow} \begin{array}{ll} \tilde \nabla_{X^h}Y^h|_Q = ( \nabla_{ X}Y|_q)^h- \frac{1}{2}( R_q(X,Y)\xi)^v, \ &\tilde \nabla_{X^v}Y^h|_Q = \frac{1}{2}( R_q(\xi , X)Y)^h,\\[2ex] \tilde \nabla_{X^h}Y^v|_Q = ( \nabla_{X}Y|_q)^v+ \frac{1}{2}(R_q(\xi ,Y)X)^h, \ & \tilde \nabla_{ X^v} Y^v|_Q=0, \end{array} \end{equation} {\it where $ \nabla$ and $ R$ are the Levi-Civita connection and the curvature tensor of $M^n$ respectively}. \begin{remark} The formulas \eqref{Kow} are applicable to the \emph{lifts} of vector fields only. A formal application to a general field on tangent bundle may lead to wrong result. For example, $$ \begin{array}{rl} \tilde\nabla_{X^v} (\xi^i (\partial_i)^h) =& X^v(\xi^i)\, \partial_i^h + \xi^i \tilde\nabla_{X^v}\partial_i^h \\ =& X^i \partial_i^h + \xi^i \frac{1}{2} \big(R(\xi, X)\partial_i\big)^h = X^h + \frac{1}{2} \big(R(\xi, X)\xi \big)^h \end{array} $$ and we have an additional term in the formulas. We will use this rule in our calculations without special comments. \end{remark} \section{Local description of 2-dimensional totally geodesic submanifolds in $TM^2$}\label{2-dim} In this section we prove Theorem 1. The proof is given in a series of subsections. Namely, in subsection \ref{Prel} we prove the item (a), in subsection \ref{VF} we prove the item (c) and finally, in subsection \ref{Ruled} we prove Theorem 2 and therefore, the item (b) of Theorem 1. \subsection{Preliminary considerations.}\label{Prel} Let $\tilde F^2$ be a submanifold in $TM^2$. Let $(x^1,x^2;\xi^1,\xi^2)$ be a local chart on $TM^2$. Then locally $\tilde F^2$ can be given by mapping $f$ of the form $$ f: \left\{ \begin{array}{l} x^1=x^1(u^1,u^2),\\[1ex] x^2=x^2(u^1,u^2), \end{array} \right. 
\quad \begin{array}{l} \xi^1=\xi^1(u^1,u^2),\\[1ex] \xi^2=\xi^2(u^1,u^2), \end{array} $$ where $u^1,u^2 $ are the local parameters on $\tilde F^2$. The Jacobian matrix $f_*$ of the mapping $f$ is of the form {\large $$ f_*= \left( \begin{array}{cc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2} \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2} \\[1ex] \frac{\partial \xi^1}{\partial u^1} & \frac{\partial \xi^1}{\partial u^2} \\[1ex] \frac{\partial \xi^2}{\partial u^1} & \frac{\partial \xi^2}{\partial u^2} \\[1ex] \end{array} \right). $$ } Since $\mathrm{rank}\, f_*=2$, we have three \textit{geometrically different} possibilities to achieve the rank, namely {\large $$ \begin{array}{l} (a)\quad \det \left( \begin{array}{cc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2} \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2} \\[1ex] \end{array} \right)\ne0; \qquad (b)\quad \det \left( \begin{array}{cc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2} \\[1ex] \frac{\partial \xi^1}{\partial u^1} & \frac{\partial \xi^1}{\partial u^2} \\[1ex] \end{array} \right)\ne0; \\[4ex] (c)\quad \det \left( \begin{array}{cc} \frac{\partial \xi^1}{\partial u^1} & \frac{\partial \xi^1}{\partial u^2} \\[1ex] \frac{\partial \xi^2}{\partial u^1} & \frac{\partial \xi^2}{\partial u^2} \\[1ex] \end{array} \right)\ne0. \end{array} $$ } Without loss of generality we can consider these possibilities in such a way that (b) excludes (a), and (c) excludes (a) and (b), restricting the considerations to a smaller neighbourhood or even to an open and dense subset. \textbf{Case (a).} In this case one can locally parameterize the submanifold under consideration as $$ f: \left\{ \begin{array}{l} x^1=u^1,\\[1ex] x^2=u^2, \end{array} \right.
\quad \begin{array}{l} \xi^1=\xi^1(u^1,u^2),\\[1ex] \xi^2=\xi^2(u^1,u^2), \end{array} $$ and we can consider the submanifold $\tilde F^2$ as an image of the vector field $\xi(u^1,u^2)$ on the base manifold. Denote $\tilde F^2$ in this case by $\xi(M^2)$. We analyze this case in subsection \ref{VF}. \textbf{Case (b).} In this case one can parameterize the submanifold $\tilde F^2$ as $$ f: \left\{ \begin{array}{l} x^1=u^1,\\[1ex] x^2=x^2(u^1,u^2), \end{array} \right. \quad \begin{array}{l} \xi^1=u^2,\\[1ex] \xi^2=\xi^2(u^1,u^2). \end{array} $$ Taking into account that we exclude the case (a) in considerations of the case (b), we should set $$ \det \left( \begin{array}{cc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2} \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2} \\[1ex] \end{array} \right)= \det \left( \begin{array}{cc} 1 & 0 \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2} \\[1ex] \end{array} \right)=\frac{\partial x^2}{\partial u^2}=0. $$ Therefore, $x^2(u^1,u^2)$ does not depend on $u^2$ and the local representation takes the form $$ f: \left\{ \begin{array}{l} x^1=u^1,\\[1ex] x^2=x^2(u^1), \end{array} \right. \quad \begin{array}{l} \xi^1=u^2,\\[1ex] \xi^2=\xi^2(u^1,u^2).\\[1ex] \end{array} $$ Remark that $\pi(\tilde F^2)=(u^1,x^2(u^1))$ is a regular curve on $M^2$. If we denote this projection by $\gamma(s)$, parameterized by the arc-length parameter, and set $u^2:=t$, the local parametrization of $\tilde F^2$ takes the form \begin{equation}\label{r_def} \gamma(s): \left\{ \begin{array}{l} x^1=x^1(s),\\[1ex] x^2=x^2(s), \end{array} \right. \qquad \xi(t,s): \left\{ \begin{array}{l} \xi^1=t,\\[1ex] \xi^2=\xi^2(t,s) \end{array} \right. \end{equation} We can interpret this kind of submanifold in $TM^2$ as a one-parametric family of smooth vector fields over a regular curve on the base manifold.
We will refer to this kind of submanifolds as \textit{ruled submanifolds} in $TM^2$ and analyze their totally geodesic property in subsection \ref{Ruled}. \textbf{Case (c).} In this case a local parametrization of $\tilde F^2$ can be given as $$ f: \left\{ \begin{array}{l} x^1=x^1(u^1,u^2),\\[1ex] x^2=x^2(u^1,u^2), \end{array} \right. \quad \begin{array}{l} \xi^1=u^1,\\[1ex] \xi^2=u^2.\\[1ex] \end{array} $$ Taking into account that we exclude the case (b) when considering the case (c), we should suppose $$ \det \left( \begin{array}{cc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2} \\[1ex] \frac{\partial \xi^1}{\partial u^1} & \frac{\partial \xi^1}{\partial u^2} \\[1ex] \end{array} \right)= \det \left( \begin{array}{cc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2} \\[1ex] 1 & 0 \\[1ex] \end{array} \right)=-\frac{\partial x^1}{\partial u^2}=0. $$ $$ \det \left( \begin{array}{cc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2} \\[1ex] \frac{\partial \xi^2}{\partial u^1} & \frac{\partial \xi^2}{\partial u^2} \\[1ex] \end{array} \right)= \det \left( \begin{array}{cc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2} \\[1ex] 0 & 1 \\[1ex] \end{array} \right)=\frac{\partial x^1}{\partial u^1}=0. $$ Thus, we conclude $x^1=const$. In the same way, we get $x^2=const$. Therefore, \textit{a submanifold of this kind is nothing else but the fiber, which is evidently totally geodesic, and there is nothing to prove.}
The submanifold $\xi(M^n)$ is totally geodesic in $TM^n$ if and only if for any vector fields $X,Y$ on $M^n$ the following equation holds \begin{equation}\label{Eq1} r(X,Y)\xi+r(Y,X)\xi -\nabla_{h_\xi(X,Y)}\xi=0, \end{equation} where $r(X,Y)\xi=\nabla_X\nabla_Y\xi-\nabla_{\nabla_XY}\xi$ is ``half'' the Riemannian curvature tensor and $h_\xi(X,Y)=R(\xi,\nabla_X\xi)Y+R(\xi,\nabla_Y\xi )X$. It is natural to rewrite this equation in terms of $\rho$ and $e_\xi$, where $e_\xi$ is a \textit{unit} vector field and $\rho$ is the length function of $\xi$. \begin{lemma} Let $\xi=\rho\,e_\xi$ be a vector field on a Riemannian manifold $M^n$. Then $\xi(M^n)$ is totally geodesic in $TM^n$ if and only if for any vector field $X$ the following equations hold \begin{equation}\label{Eq2} \left\{ \begin{array}{l} Hess_\rho(X,X)-\rho^2\Big(R(e_\xi,\nabla_Xe_\xi)X\Big)(\rho)-\rho\,|\nabla_Xe_\xi|^2=0,\\[2ex] \rho^3\nabla_{R(e_\xi,\nabla_Xe_\xi)X}e_\xi-2X(\rho)\nabla_Xe_\xi-\rho(r(X,X)e_\xi+ |\nabla_Xe_\xi|^2\,e_\xi)=0, \end{array} \right. \end{equation} where $Hess_\rho(X,X)$ is the Hessian of the function $\rho$. \end{lemma} \begin{proof} Indeed, the equation (\ref{Eq1}) is equivalent to \begin{equation}\label{Eq3} r(X,X)\xi=\nabla_{R(\xi,\nabla_X\xi)X}\xi, \end{equation} where $X$ is an arbitrary vector field. Setting $\xi=\rho\,e_\xi$, where $e_\xi$ is a unit vector field, we have $$ \begin{array}{ll} r(X,X)\xi=&\nabla_X\nabla_X(\rho\,e_\xi)-\nabla_{\nabla_XX}(\rho\,e_\xi)=\\[1ex] &\nabla_X(X(\rho)e_\xi+\rho\,\nabla_Xe_\xi)-(\nabla_XX)(\rho)e_\xi-\rho\,\nabla_{\nabla_XX}e_\xi=\\[1ex] &\Big(X(X(\rho))-(\nabla_XX)(\rho)\Big)e_\xi+2X(\rho)\nabla_Xe_\xi+\rho\,r(X,X)e_\xi \end{array} $$ and $$ \nabla_{R(\xi,\nabla_X\xi)X}\xi=\rho^2\Big(R(e_\xi,\nabla_Xe_\xi)X\Big)(\rho)\, e_\xi +\rho^3\nabla_{R(e_\xi,\nabla_Xe_\xi)X}e_\xi.
$$ If we remark that $X(X(\rho))-(\nabla_XX)(\rho)\stackrel{def}{=}Hess_\rho(X,X)$ and that for a unit vector field $e_\xi$ $$ \big<r(X,X)e_\xi,e_\xi\big>=-|\nabla_Xe_\xi|^2, $$ then we can easily decompose the equation (\ref{Eq3}) into components parallel and orthogonal to $e_\xi$, which gives the equations (\ref{Eq2}). \end{proof} \begin{corollary}\label{Cor1} Suppose that $M^n$ admits a totally geodesic vector field $\xi=\rho\,e_\xi$. Then (a) the function $\rho$ has no strict maxima; (b) there is a bivector field $e_0\wedge\nabla_{e_0}e_0$ such that $e_\xi$ is parallel along it. In particular, if $n=2$ then either $M^2$ is flat or $e_0$ is a geodesic vector field and $\rho$ is linear with respect to the natural parameter along each $e_0$ geodesic line. Moreover, the field $\xi$ makes a constant angle with each $e_0$ geodesic line. \end{corollary} \begin{proof} Indeed, for any unit vector field $\eta$ consider the linear mapping $\nabla_Z\eta|_q : T_qM^n\to \eta^\perp_q$, where $\eta^\perp_q$ is the orthogonal complement to $\eta$ in $T_qM^n$. For dimensional reasons it follows that the kernel of this mapping is non-trivial. In other words, there exists a (unit) vector field $e_0$ such that $\nabla_{e_0}\eta=0$. Let $e_0$ be a unit vector field such that $\nabla_{e_0}e_\xi=0$. Then from $(\ref{Eq2})_1$ we conclude $$ Hess_\rho(e_0,e_0)=0 $$ at each point of $M^n$. Therefore, the Hessian of $\rho$ cannot be positive definite. Moreover, from $(\ref{Eq2})_2$ we see that $r(e_0,e_0)e_\xi=0$, which gives $\nabla_{e_0}\nabla_{e_0}e_\xi-\nabla_{\nabla_{e_0}e_0}e_\xi=-\nabla_{\nabla_{e_0}e_0}e_\xi=0$. Setting $Z=e_0\wedge\nabla_{e_0}e_0$, we get $ \nabla_Ze_\xi=0. $ Suppose now that $n=2$. If $Z\ne0$ then $e_\xi$ is a parallel vector field on $M^2$, which means that $M^2$ is flat. If $Z=0$ then evidently $e_0$ is a geodesic vector field.
Since in this case $Hess_\rho (e_0,e_0)=e_0(e_0(\rho))=0$, we conclude that $\rho$ is linear with respect to the natural parameter along each $e_0$ geodesic line. As concerns the angle function $\big<e_0,e_\xi\big>$, we have $$ e_0\big<e_0,e_\xi\big>=\big<\nabla_{e_0}e_0,e_\xi\big>+\big<e_0,\nabla_{e_0}e_\xi\big>=0. $$ \end{proof} Taking into account the Corollary \ref{Cor1}, introduce on $M^2$ a semi-geodesic coordinate system $(u,v)$ such that $e_\xi$ is parallel along $u$-geodesics. Let \begin{equation}\label{metric} ds^2=du^2+b^2(u,v)\,dv^2 \end{equation} be the first fundamental form of $M^2$ with respect to this coordinate system. Denote by $\partial_1$ and $\partial_2$ the corresponding coordinate vector fields. Then the following equations should be satisfied: $$ \nabla_{\partial_1}e_\xi=0, \qquad \partial_1^2(\rho)=0. $$ Introduce the unit vector fields $$ e_1=\partial_1,\qquad e_2=\frac{1}{b}\partial_2. $$ Then the following rules of covariant derivation are valid \begin{equation}\label{Frenet} \begin{array}{ll} \nabla_{e_1}e_1=0,\quad &\nabla_{e_1}e_2=0,\\[1ex] \nabla_{e_2}e_1=-k\,e_2 \quad &\nabla_{e_2}e_2=k\,e_1, \end{array} \end{equation} where $k$ is a (signed) geodesic curvature of $v$-curves. Remark that $$ k=-\frac{\partial_1 b}{b}. $$ With respect to chosen coordinate system, the field $\xi$ can be expressed as \begin{equation}\label{field} \xi=\rho\,(\cos \omega\,e_1+\sin\omega\,e_2), \end{equation} where $\omega=\omega(u,v)$ is an angle function, i.e. $$ e_\xi=\cos \omega\,e_1+\sin\omega\,e_2. $$ Introduce a unit vector field $\nu_\xi$ by $$ \nu_\xi=-\sin \omega\,e_1+\cos\omega\,e_2. $$ Then we can easily find $$ \begin{array}{l} \nabla_{e_1}e_\xi=\partial_1\omega\,\nu_\xi, \\[1ex] \nabla_{e_2}e_\xi=(e_2(\omega)-k)\,\nu_\xi. \end{array} $$ Since $e_\xi$ is parallel along $u$-curves, we conclude that $\partial_1\omega=0$, so that $\omega=\omega(v)$. 
Now the problem can be formulated as {\it On a Riemannian 2-manifold with the metric (\ref{metric}), find a vector field of the form (\ref{field}) with \begin{equation}\label{cond} \partial^2_1\rho=0 \mbox{ and } \omega=\omega(v) \end{equation} satisfying the equation (\ref{Eq3}).} \begin{lemma}\label{tgcond1} Let $M^2$ be a Riemannian 2-manifold with the metric (\ref{metric}) and $\xi$ be a local vector field on $M^2$ satisfying (\ref{cond}). Then $\xi$ is totally geodesic if and only if \begin{equation}\label{tgEqn1} \begin{array}{ll} \nabla_{e_2}\nabla_{e_2}\xi-(k+c \,K)\nabla_{e_1}\xi=0,\\[1ex] \nabla_{e_1}\nabla_{e_2}\xi+\nabla_{e_2}\nabla_{e_1}\xi+(k+c\, K)\nabla_{e_2}\xi=0, \end{array} \end{equation} or in a scalar form \begin{equation}\label{tgEqn2} \begin{array}{l} \left\{ \begin{array}{l} e_2(e_2(\rho))-(k+c\,K)\,e_1(\rho)=\rho\lambda^2,\\[1ex] e_2(c)=0, \end{array} \right.\\[2ex] \left\{ \begin{array}{l} 2e_1(e_2(\rho))+c\,K\,e_2(\rho)=0,\\[1ex] e_1(c)+c\,(k+c\,K)=0 \end{array} \right. \end{array} \end{equation} where $\lambda:=\big<\nabla_{e_2}e_\xi,\nu_\xi\big>=e_2(\omega)-k$, $c:=\rho^2\lambda=\pm |\,\xi\wedge\nabla_{e_2}\xi|$ and $K$ is the Gaussian curvature of $M^2$. \end{lemma} \begin{proof} Indeed, $$ \begin{array}{l} \nabla_{e_1}\xi=e_1(\rho)\,e_\xi,\\[1ex] \nabla_{e_2}\xi=e_2(\rho)\,e_\xi+\rho\lambda\nu_\xi. \end{array} $$ So, taking into account (\ref{Frenet}) and (\ref{cond}), we have $$ \begin{array}{l} r(e_1,e_1)\xi=\nabla_{e_1}\nabla_{e_1}\xi-\nabla_{\nabla_{e_1}e_1}\xi=e_1(e_1(\rho))\,e_\xi= \partial^2_1\rho\,e_\xi=0,\\[1ex] r(e_1,e_2)\xi=\nabla_{e_1}\nabla_{e_2}\xi-\nabla_{\nabla_{e_1}e_2}\xi=\nabla_{e_1}\nabla_{e_2}\xi,\\[1ex] r(e_2,e_1)\xi=\nabla_{e_2}\nabla_{e_1}\xi-\nabla_{\nabla_{e_2}e_1}\xi=\nabla_{e_2}\nabla_{e_1}\xi+ k\nabla_{e_2}\xi,\\[1ex] r(e_2,e_2)\xi=\nabla_{e_2}\nabla_{e_2}\xi-\nabla_{\nabla_{e_2}e_2}\xi=\nabla_{e_2}\nabla_{e_2}\xi- k\nabla_{e_1}\xi. 
\end{array} $$ As concerns the right-hand side of (\ref{Eq3}), we have $$ \begin{array}{l} R(\xi,\nabla_{e_1}\xi)e_1=0,\quad R(\xi,\nabla_{e_1}\xi)e_2=0,\\[1ex] R(\xi,\nabla_{e_2}\xi)e_1=\rho^2\lambda\,R(e_\xi,\nu_\xi)e_1=-\rho^2\lambda\,K\,e_2,\\[1ex] R(\xi,\nabla_{e_2}\xi)e_2=\rho^2\lambda\,R(e_\xi,\nu_\xi)e_2=\rho^2\lambda\,K\,e_1. \end{array} $$ Therefore, setting $X=e_1$ in (\ref{Eq3}), we obtain an identity. Setting $X=e_2$, we have $$ \nabla_{e_2}\nabla_{e_2}\xi-k\nabla_{e_1}\xi=\rho^2\lambda\,K\nabla_{e_1}\xi. $$ Setting $X=e_1+e_2$, we obtain $$ r(e_1,e_2)\xi+r(e_2,e_1)\xi=-\rho^2\lambda\,K\,\nabla_{e_2}\xi, $$ which can be reduced to $$ \nabla_{e_1}\nabla_{e_2}\xi+\nabla_{e_2}\nabla_{e_1}\xi+k\nabla_{e_2}\xi=-\rho^2\lambda\, K\nabla_{e_2}\xi. $$ It remains to mention that $$ |\,\xi\wedge\nabla_{e_2}\xi|^2=|\xi|^2\,|\nabla_{e_2}\xi|^2-\big<\xi,\nabla_{e_2}\xi\big>^2= \rho^2(e_2(\rho)^2+\rho^2\lambda^2)-(e_2(\rho)\rho)^2=\rho^4\lambda^2. $$ So, if we set $c=\rho^2\lambda$, we evidently obtain (\ref{tgEqn1}). Moreover, continuing calculations, we see that $$ \begin{array}{rl} \nabla_{e_2}\nabla_{e_2}\xi=\!\!&\Big[e_2(e_2(\rho))-\rho\lambda^2\Big]\,e_\xi+ \Big[e_2(\rho)\lambda+e_2(\rho\lambda)\Big]\,\nu_\xi=\\[1ex] &\Big[e_2(e_2(\rho))-\rho\lambda^2\Big]\,e_\xi+\frac{1}{\rho}e_2(c)\,\nu_\xi,\\[2ex] \nabla_{e_1}\nabla_{e_2}\xi+\nabla_{e_2}\nabla_{e_1}\xi=\!\! &\!\!\Big[e_2(e_1(\rho))+e_1(e_2(\rho))\Big]e_\xi+\Big[e_1(\rho)\lambda+e_1(\rho\lambda)\Big]\nu_\xi=\\[1ex] &\!\!\Big[e_2(e_1(\rho))+e_1(e_2(\rho))\Big]e_\xi+\frac{1}{\rho}e_1(c)\nu_\xi. 
\end{array} $$ Taking into account that $e_1(e_2(\rho))-e_2(e_1(\rho))=k\,e_2(\rho)$, the equations (\ref{tgEqn1}) can be written as $$ \begin{array}{l} \Big[e_2(e_2(\rho))-\rho\lambda^2\Big]\,e_\xi+\frac{1}{\rho}e_2(c)\,\nu_\xi-(k+cK)e_1(\rho)\,e_\xi=0\\[1ex] \Big[2\,e_1(e_2(\rho))-k\,e_2(\rho)\Big]\,e_\xi+\frac{1}{\rho}e_1(c)\,\nu_\xi+(k+cK)\Big[e_2(\rho)\,e_\xi+ \rho\lambda\,\nu_\xi\Big]=0 \end{array} $$ and after evident simplifications we obtain the equations (\ref{tgEqn2}). \end{proof} \begin{proposition} Let $M^2$ be a Riemannian manifold of constant curvature. Suppose $\xi$ is a non-zero local vector field on $M^2$ such that $\xi(M^2)$ is totally geodesic in $TM^2$. Then $M^2$ is flat. \end{proposition} \begin{proof} Let $M^2$ be a Riemannian manifold of constant curvature $K\ne0$. Then the function $b$ in (\ref{metric}) should satisfy the equation $$ -\frac{\partial_{11}b}{b}=K. $$ The general solution of this equation can be expressed in three forms: \begin{itemize} \item[(a)] $b(u,v)= A(v)\cos(u/r+\theta(v))$ or $b(u,v)=A(v)\sin(u/r+\theta(v))$ for $K=1/r^2>0$; \item[(b)] $b(u,v)= A(v)\cosh(u/r+\theta(v))$ or $b(u,v)=A(v)\sinh(u/r+\theta(v))$ for $K=-1/r^2<0$; \item[(c)] $b(u,v)= A(v)e^{u/r}$ for $K=-1/r^2<0$. \end{itemize} Evidently, we may set $A(v)\equiv 1$ (making a $v$-parameter change) in each of these cases. The equation $(\ref{tgEqn2})_2$ means that $c$ does not depend on $v$. Since $K$ is constant, the equation $(\ref{tgEqn2})_4$ implies $$ e_2(k)=0. $$ If we remark that $k=-\frac{\partial_{1}b}{b}$ then one can easily find $\theta(v)=const$ in cases $(a)$ and $(b)$.
After a $u$-parameter change, the function $b$ takes one of the forms \begin{itemize} \item[(a)] $b(u,v)= \cos(u/r)$ or $b(u,v)=\sin(u/r)$ for $K=1/r^2>0$; \item[(b)] $b(u,v)= \cosh(u/r)$ or $b(u,v)=\sinh(u/r)$ for $K=-1/r^2<0$; \item[(c)] $b(u,v)= e^{u/r}$ for $K=-1/r^2<0$; \end{itemize} From the equation $(\ref{tgEqn2})_4$ we find $$ cK=-\frac{e_1(c)}{c}-k=-\frac{e_1(c)}{c}+\frac{e_1(b)}{b}=e_1(\ln b/c). $$ Suppose first that $ e_2(\rho)\ne0$. Multiplying $(\ref{tgEqn2})_3$ by $e_2(\rho)$ we can easily solve this equation with respect to $e_2(\rho)$ by a chain of simple transformations: $$ \begin{array}{l} 2e_2(\rho)\cdot e_1(e_2(\rho))+e_1(\ln b/c)\cdot [e_2(\rho)^2]=0,\\[1ex] e_1[e_2(\rho)^2]+e_1(\ln b/c)\cdot[e_2(\rho)^2]=0,\\[1ex] \frac{e_1[e_2(\rho)^2]}{e_2(\rho)^2}+e_1(\ln b/c)=0, \\[1ex] e_1[\ln e_2(\rho)^2]+e_1(\ln b/c)=0, \\[1ex] e_1( \ln[e_2(\rho)^2\,b/c])=0 \end{array} $$ and therefore, $e_2(\rho)^2\,b/c=h(v)^2$ or $$ \partial_2\rho=h(v)\sqrt{c\,b}. $$ Since $\rho$ is linear with respect to the $u$-parameter, say $\rho=a_1(v)u+a_2(v)$, then $\partial_2\rho=a_1'u+a_2'$ and therefore $\sqrt{cb}$ is also linear with respect to $u$, namely $\sqrt{cb}=m_1(v)u+m_2(v)=\frac{a_1'}{h}\,u+\frac{a_2'}{h}$. But the functions $c$ and $b$ do not depend on $v$. Therefore $m_1$ and $m_2$ are constants, so $a_1=m_1\int h(v)\,dv, a_2=m_2\int h(v)\, dv$. Thus $$ \sqrt{cb}=m_1u+m_2. $$ Now the function $c$ takes the form $$ c(u)=\frac{(m_1u+m_2)^2}{b} $$ and therefore $$ e_1(c)=\frac{2m_1(m_1u+m_2)}{b}-\frac{(m_1u+m_2)^2\partial_1 b}{b^2}. $$ Substitution into $(\ref{tgEqn2})_4$ gives $$ \frac{2m_1(m_1u+m_2)}{b}-\frac{2(m_1u+m_2)^2\partial_1 b}{b^2}+\frac{(m_1u+m_2)^4}{b^2}K=0 $$ or $$ \frac{(m_1u+m_2)}{b^2}\Big[2m_1b-2(m_1u+m_2)\partial_1b+(m_1u+m_2)^3K \Big]=0. $$ The expression in brackets is an algebraic one and can not be identically zero if $K\ne 0$. Therefore $m_1=m_2=0$ and hence $\rho^2\lambda:= c=0$. But this identity implies $\lambda=0$ or $ \rho=0$. 
If $\lambda=0$ then $e_\xi$ is a parallel unit vector field and therefore, $M^2$ is flat and we come to a contradiction. Therefore $\rho=0$. \begin{remark} If $K= 0$, we cannot conclude that $c=0$. In this case the expression in brackets can be identically zero for $m_1=0$ and $b=const$. And we have $c=m_2=const$. \end{remark} Suppose now that $e_2(\rho)=0$. Then $$ \rho=a_1u+a_2, $$ where $a_1,a_2$ are constants and we obtain the following system \begin{equation}\label{tgEqn3} \begin{array}{l} -(k+cK)\partial_1\rho=\rho\lambda^2,\\[1ex] \partial_2\,c=0, \\[1ex] \partial_1c+c(k+cK)=0. \end{array} \end{equation} If $\partial_1\rho=0$ then immediately $\rho=0$ or $\lambda=0$. The identity $\lambda=0$ implies $K=0$ as above. Therefore, $\rho=0$. Suppose $\partial_1\rho\ne 0$ or equivalently $a_1\ne 0$. Then from $(\ref{tgEqn3})_1$ we get \begin{equation}\label{subs} (k+cK)=-\frac{\rho\lambda^2}{a_1} \end{equation} Since $c=\rho^2\lambda$, from $(\ref{tgEqn3})_2$ we see that $\partial_2\lambda=0$ or $\partial_2\left[\frac{\partial_2\omega+\partial_1b}{b}\right]=0$. Since $b$ does not depend on $v$, we have $\partial_{22}\omega=0$ or equivalently $\partial_2\omega=\alpha=const$. Thus, $\lambda=\frac{\alpha+\partial_1b}{b}$. Now we can find $\partial_1c$ in two ways. First, from $(\ref{tgEqn3})_3$ using (\ref{subs}) and keeping in mind that $c=\rho^2\lambda$: $$ \partial_1c=c\frac{\rho\lambda^2}{a_1}=\frac{\rho^3\lambda^3}{a_1} $$ Second, directly: $$ \partial_1c=2\rho\partial_1\rho\lambda+\rho^2\partial_1\lambda. $$ It is easy to see that $\partial_1\lambda=k\lambda-K$ and hence we get $$ \partial_1c=2a_1\rho\lambda+\rho^2(k\lambda-K). $$ Equating the two expressions, we have $$ 2a_1\rho\lambda+\rho^2(k\lambda-K)-\frac{\rho^3\lambda^3}{a_1}=0 $$ or $$ \frac{\rho}{a_1}\left[ 2a_1^2\lambda+a_1\rho(k\lambda-K)-\rho^2\lambda^3\right]=0. $$ The expression in brackets is an algebraic one and cannot be identically zero for $K\ne 0$. Since $\rho\not= 0$, we obtain a contradiction.
\begin{remark} We do not obtain a contradiction if $K= 0$, since we have another solution $\lambda=0$ which gives $\partial_1b+\alpha=0$ and hence $b=-\alpha u+m$. \end{remark} \end{proof} We have achieved the result by putting a restriction on the geometry of the base manifold. Putting a restriction on the vector field, we are able to achieve a similar result. Recall that a totally geodesic vector field necessarily makes a constant angle with some family of geodesics on the base manifold (see Corollary \ref{Cor1}). It is not parallel along this family and this fact is essential for its totally geodesic property. Namely, \begin{proposition}\label{Parallel} Let $M^2$ be a Riemannian manifold. Suppose $\xi$ is a non-zero local vector field on $M^2$ which is parallel along some family of geodesics of $M^2$. If $\xi(M^2)$ is totally geodesic in $TM^2$ then $M^2$ is flat. \end{proposition} \begin{remark} Geometrically, this assertion means that if $\xi(M^2)$ is not transversal to the horizontal distribution on $TM^2$ then $\xi(M^2)$ is never totally geodesic in $TM^2$ except when $M^2$ is flat. \end{remark} \begin{proof} Let $M^2$ be a \emph{non-flat} Riemannian manifold and suppose that the hypothesis of the proposition is fulfilled. Then, choosing a coordinate system as in Lemma \ref{tgcond1}, we have $$ \nabla_{e_1}\xi=0 $$ and we can reduce (\ref{tgEqn1}) to \begin{equation}\label{parEqn} \begin{array}{l} \nabla_{e_2}\nabla_{e_2}\xi=0,\\ \nabla_{e_1}\nabla_{e_2}\xi+(k+cK)\nabla_{e_2}\xi=0. \end{array} \end{equation} Now make a simple computation. $$ \begin{array}{l} R(e_2,e_1)\nabla_{e_2}\xi=\nabla_{e_2}\nabla_{e_1}\nabla_{e_2}\xi- \nabla_{e_1}\nabla_{e_2}\nabla_{e_2}\xi-\nabla_{[e_2,e_1]}\nabla_{e_2}\xi=\\[1ex] \nabla_{e_2}\nabla_{e_1}\nabla_{e_2}\xi-k\nabla_{e_2}\nabla_{e_2}\xi= \nabla_{e_2}\nabla_{e_1}\nabla_{e_2}\xi. \end{array} $$ On the other hand, differentiating $(\ref{parEqn})_2$, we find $$ \nabla_{e_2}\nabla_{e_1}\nabla_{e_2}\xi=-e_2(k+cK)\nabla_{e_2}\xi.
$$ So we have $$ R(e_2,e_1)\nabla_{e_2}\xi=-e_2(k+cK)\nabla_{e_2}\xi. $$ Therefore, either $\nabla_{e_2}\xi=0$ or $e_2(k+cK)=0$. If we accept the first case we see that $\xi$ is a parallel vector field on $M^2$ and we get a contradiction. If we accept the second case, we obtain $$ R(e_2,e_1)\nabla_{e_2}\xi=0, $$ which means that $\nabla_{e_2}\xi$ belongs to a kernel of the curvature operator of $M^2$. In dimension 2 this means that $M^2$ is flat or, equivalently, $\xi$ is a parallel vector field and we obtain a contradiction, as well. \end{proof} \subsection{Ruled totally geodesic submanifolds in $TM^2$}\label{Ruled} \begin{proposition} Let $M^2$ be a Riemannian manifold of sign-preserving curvature. Consider a ruled submanifold $\tilde F^2$ in $TM^2$ given locally by $$ \gamma(s): \left\{\begin{array}{l} x^1=x^1(s),\\[1ex] x^2=x^2(s), \end{array} \right. \qquad \xi(t,s): \left\{ \begin{array}{l} \xi^1=t,\\[1ex] \xi^2=\xi^2(t,s). \end{array} \right. $$ Then $\tilde F^2$ is totally geodesic in $TM^2$ if $\gamma(s)$ is a geodesic in $M^2$, $$\xi(t,s)=t\,\rho(s)\,e(s),$$ where $e(s)$ is a unit vector field which is parallel along $\gamma$ and $\rho(s)$ is an arbitrary smooth function. \end{proposition} \begin{remark} Geometrically, $\tilde F^2$ is a cylinder-type surface based on geodesic $\gamma(s)$ with elements directed by a unit vector field $e(s)$ parallel along $\gamma(s)$. \end{remark} \begin{proof} Fixing $s=s_0$, we see that $F^2$ meets the fiber over $x^1(s_0),x^2(s_0)$ by a curve $\xi(t,s_0)$. If $F^2$ is supposed to be totally geodesic, then this curve is a straight line on the fiber. Therefore, the family $\xi(t,s)$ should be of the form $$ \xi(t,s): \left\{ \begin{array}{l} \xi^1=t,\\[1ex] \xi^2=\alpha(s)t+\beta(s). \end{array} \right. $$ Introduce two vector fields given along $\gamma(s)$ by \begin{equation}\label{fields1} a=\partial_1+\alpha(s)\partial_2,\quad b=\beta(s)\partial_2. 
\end{equation} Then we can represent $\xi(t,s)$ as $$ \xi(t,s)=a(s)\,t+b(s). $$ Denote by $\tau$ and $\nu$ the vectors of the Frenet frame of the curve $\gamma(s)$. Denote also by (\,$'$) the covariant derivative of vector fields with respect to the arc-length parameter on $\gamma(s)$. Then $$ \left\{ \begin{array}{l} \tau\,'=k\,\nu,\\ \nu\,'=-k\,\tau. \end{array} \right. $$ Denote by $\tilde\partial_1, \tilde\partial_2$ the $s$ and $t$ coordinate vector fields on $F^2$ respectively. A simple calculation yields $$ \tilde\partial_1={\tau}^h+(\xi')^v,\quad \tilde\partial_2=a^v. $$ One of the unit normal vector fields can be found immediately, namely $\tilde N_1={\nu\,}^h$. Consider the conditions on $F^2$ to be totally geodesic with respect to the normal vector field $\tilde N_1$. Using formulas (\ref{Kow}), $$ \tilde\nabla_{\tilde \partial_1}\tilde N_1=\tilde\nabla_{\tau\,^h+(\xi')^v}\nu\,^h= -k\tau\,^h-\frac12\Big[R(\tau,\nu)\xi\Big]^v+\frac12\Big[R(\xi,\xi')\nu\Big]^h $$ Therefore, $$ \big<\big<\tilde\nabla_{\tilde \partial_1}\tilde N_1,\tilde\partial_2\big>\big>= -\frac12\big<R(\tau,\nu)\xi,a\big>= -\frac12\big<R(\tau,\nu)b,a\big>=0. $$ Since $M^2$ is supposed to be non-flat, it follows $b\wedge a=0$. From (\ref{fields1}) we conclude $b=0$. Thus, $\xi(t,s)=a(s)\,t$. Moreover, $$ \begin{array}{rl} \big<\big<\tilde\nabla_{\tilde \partial_1}\tilde N_1,\tilde\partial_1\big>\big>=&-k-\frac12\big<R(\tau,\nu)\xi,\xi'\big> +\frac12\big<R(\xi,\xi')\nu,\tau\big>=\\[1ex] &-k+\big<R(\xi,\xi')\nu,\tau\big>=-k+t^2\big<R(a,a')\nu,\tau\big>=0 \end{array} $$ identically with respect to parameter $t$. Therefore, $k=0$ and $a\wedge a'=0$. Thus, $\gamma(s)$ is a geodesic line on $M^2$. In addition, $(a\wedge a'=0)\sim (a'=\lambda a)$. Set $a=\rho(s)\, e(s)$, where $\rho=|a(s)|$. Then $(a'=\lambda a)\sim(\rho'\,e+\rho\,e'=\lambda\rho\,e) $, which means that $e'=0$. 
From this we conclude $$ \xi(t,s)=t\rho(s)\,e(s), $$ where $\rho(s)$ is an arbitrary function and $e(s)$ is a unit vector field, parallel along $\gamma(s)$. Therefore, $$ \tilde\partial_1={\tau}^h+t\rho\,'\,e^v,\quad \tilde\partial_2=\rho \,e^v $$ and we can find another unit normal vector field $\tilde N_2=(e^\perp)^v$, where $e^\perp(s)$ is a unit vector field also parallel along $\gamma(s)$ and orthogonal to $e(s)$. For this vector field we have $$ \begin{array}{l} \tilde\nabla_{\tilde \partial_1}\tilde N_2=\tilde\nabla_{\tau\,^h+(\xi')^v}(e^\perp)^v= [(e^\perp)']^v+\frac12\Big[R(\xi,e^\perp)\tau\Big]^h=\frac12t\rho \Big[R(e,e^\perp)\tau\Big]^h,\\[1ex] \tilde\nabla_{\tilde \partial_2}\tilde N_2=0 \end{array} $$ Evidently, $ \big<\big<\tilde\nabla_{\tilde \partial_i}\tilde N_2,\tilde\partial_k\big>\big>=0$ for all $i,k=1,2$. Thus, the submanifold is totally geodesic. \end{proof} The converse statement is true in general. \begin{proposition}\label{Gener} Let $M^n$ be a Riemannian manifold. Consider a cylinder type surface $\tilde F^2\subset TM^n$ parameterized as $$ \big\{\gamma(s),t\,\rho(s)\,e(s)\big\}, $$ where $\gamma(s)$ is a geodesic in $M^n$, $e(s)$ is a unit vector field, parallel along $\gamma$ and $\rho(s)$ is an arbitrary smooth function. Then $\tilde F^2$ is totally geodesic in $TM^n$ and intrinsically flat. \end{proposition} \begin{proof} Indeed, the tangent basis of $\tilde F^2$ consists of $$ \tilde \partial_1={\gamma\,'}^h+t\rho\,'e^v,\qquad \tilde \partial_2=\rho\, e^v.
$$ By formulas (\ref{Kow}), $$ \begin{array}{l} \displaystyle \tilde \nabla_{\tilde \partial_1}{\tilde \partial_1}=(\nabla_{\gamma\,'}{\gamma\,'})^h+ \frac12\big[R(\gamma\,',\gamma\,')\xi\big]^v=0\,,\\[2ex] \displaystyle \tilde \nabla_{\tilde \partial_1}{\tilde \partial_2}=(\nabla_{\gamma\,'}\rho\,e)^v+ \frac12\big[R(\xi,\rho\,e)\gamma\,'\big]^h=\rho\,'e^v\ \sim \ \tilde\partial_2\,,\\[2ex] \displaystyle \tilde \nabla_{\tilde \partial_2}{\tilde \partial_1}=\frac12\big[R(\xi,\rho\,e)\gamma\,'\big]^h= \frac12\big[R(t\,\rho\,e ,\rho\,e)\gamma\,'\big]^h=0\\[2ex] \displaystyle \tilde \nabla_{\tilde \partial_2}{\tilde \partial_2}=\rho e^v(\rho)\,e^v=0. \end{array} $$ It is easy to find the Gaussian curvature of this submanifold, since it is equal to the sectional curvature of $TM^2$ along the $\tilde\partial_1\wedge\tilde\partial_2$-plane. Using the curvature tensor expressions \cite{Kow}, we find $$ Gauss(\tilde F^2)=\big<\big<\tilde R(\tau\,^h,e^v)e^v,\tau\,^h\big>\big>=\frac14|R(\xi,e)\tau|^2=0. $$ \end{proof} \section{Local description of 3-dimensional totally geodesic submanifolds in $TM^2$}\label{3-dim} \begin{theorem} Let $M^2$ be a Riemannian manifold with Gaussian curvature $K$. A totally geodesic submanifold $\tilde F^3\subset TM^2$ locally is either a) a 3-plane in $TM^2=E^4$ if $K=0$, or b) a restriction of the tangent bundle to a geodesic $\gamma\in M^2$ such that $K|_\gamma=0$ if $K\not\equiv0$. If $M^2$ does not contain such a geodesic, then $TM^2$ does not admit 3-dimensional totally geodesic submanifolds. \end{theorem} \begin{proof} Let $\tilde F^3$ be a submanifold in $TM^2$. Let $(x^1,x^2;\xi^1,\xi^2)$ be a local chart on $TM^2$. Then locally $\tilde F^3$ can be given by a mapping $f$ of the form $$ f: \left\{ \begin{array}{l} x^1=x^1(u^1,u^2,u^3),\\[1ex] x^2=x^2(u^1,u^2,u^3),\\[1ex] \xi^1=\xi^1(u^1,u^2,u^3),\\[1ex] \xi^2=\xi^2(u^1,u^2,u^3),\\[1ex] \end{array} \right. $$ where $u^1,u^2,u^3$ are the local parameters on $\tilde F^3$.
The Jacobian matrix $f_*$ of the mapping $f$ is of the form {\large $$ f_*= \left( \begin{array}{ccc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2}& \frac{\partial x^1}{\partial u^3} \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2}& \frac{\partial x^2}{\partial u^3} \\[1ex] \frac{\partial \xi^1}{\partial u^1} & \frac{\partial \xi^1}{\partial u^2} & \frac{\partial \xi^1}{\partial u^3} \\[1ex] \frac{\partial \xi^2}{\partial u^1} & \frac{\partial \xi^2}{\partial u^2}& \frac{\partial \xi^2}{\partial u^3} \\[1ex] \end{array} \right). $$ } Since $rank \ f_*=3$, we have two geometrically different possibilities to achieve the rank, namely {\large $$ (a)\quad \det \left( \begin{array}{ccc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2}& \frac{\partial x^1}{\partial u^3} \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2}& \frac{\partial x^2}{\partial u^3} \\[1ex] \frac{\partial \xi^1}{\partial u^1} & \frac{\partial \xi^1}{\partial u^2} & \frac{\partial \xi^1}{\partial u^3} \\[1ex] \end{array} \right)\ne0; \quad (b)\ \ \det \left( \begin{array}{ccc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2}& \frac{\partial x^1}{\partial u^3} \\[1ex] \frac{\partial \xi^1}{\partial u^1} & \frac{\partial \xi^1}{\partial u^2} & \frac{\partial \xi^1}{\partial u^3} \\[1ex] \frac{\partial \xi^2}{\partial u^1} & \frac{\partial \xi^2}{\partial u^2}& \frac{\partial \xi^2}{\partial u^3} \\[1ex] \end{array} \right)\ne0. $$ } Without loss of generality we can consider these possibilities in such a way that (b) excludes (a). \textbf{Consider the case (a).} In this case we can locally parameterize the submanifold $F^3$ as $$ f:\ \left\{ \begin{array}{l} x^1=u^1,\\[1ex] x^2=u^2,\\[1ex] \xi^1=u^3,\\[1ex] \xi^2=\xi^2(u^1,u^2,u^3). \end{array} \right. $$ By hypothesis, the submanifold $\tilde F^3$ is totally geodesic in $TM^2$.
Therefore, it intersects each fiber of $TM^2$ by a vertical geodesic, i.e. by a straight line. Fix $u_0=(u^1_0, \, u^2_0)$. Then the parametric equation of $\tilde F^3\cap T_{u_0}M^2$ with respect to fiber parameters is $$\left\{ \begin{array}{l} \xi^1=u^3,\\[1ex] \xi^2=\xi^2(u^1_0,u^2_0,u^3). \end{array} \right. $$ On the other hand, this equation should be the equation of a straight line and hence $$\left\{ \begin{array}{l} \xi^1=u^3,\\[1ex] \xi^2=\alpha(u^1_0,u^2_0)\,u^3+\beta(u^1_0,u^2_0), \end{array} \right. $$ where $\alpha(u)=\alpha(u^1,u^2)$ and $\beta(u)=\beta(u^1,u^2)$ are some smooth functions on $M^2$. From this viewpoint, after setting $u^3=t$ \textit{the submanifold under consideration can be locally represented as a one-parametric family of smooth vector fields $\xi_t$ on $M^2$} of the form $$ \xi_t(u)=t\,\partial_1+\big(\alpha(u)t+\beta(u)\big)\,\partial_2 $$ with respect to the coordinate frame $\partial_1=\partial/\partial u^1,\,\partial_2=\partial/\partial u^2$. Introduce the vector fields \begin{equation}\label{fields2} a(u)=\partial_1+\alpha(u)\,\partial_2, \quad b(u)=\beta(u)\,\partial_2. \end{equation} Then $\xi_t$ can be expressed as $$ \xi_t(u)=t\,a(u)+b(u). $$ It is natural to denote by $\xi_t(M^2)$ a submanifold $\tilde F^3\subset TM^2$ of this kind. Denote by $\tilde \partial_i \ (i=1,\dots,3)$ the coordinate vector fields of $\xi_t(M^2)$. Then $$ \begin{array}{l} \tilde \partial_1=\big\{1,0,0,t\,\partial_1\alpha+\partial_1\beta\big\},\\[1ex] \tilde \partial_2=\big\{0,1,0,t\,\partial_2\alpha+\partial_2\beta\big\},\\[1ex] \tilde \partial_3=\big\{0,0,1,\alpha\big\}.\\[1ex] \end{array} $$ A direct calculation shows that these fields can be represented as $$ \begin{array}{l} \tilde \partial_1=\partial_1^h+t(\nabla_{\partial_1}\,a)^v+(\nabla_{\partial_1}\,b)^v,\\[1ex] \tilde \partial_2=\partial_2^h+t(\nabla_{\partial_2}\,a)^v+(\nabla_{\partial_2}\,b)^v,\\[1ex] \tilde \partial_3=a^v.
\end{array} $$ Denote by $\tilde N$ a normal vector field of $\xi_t(M^2)$. Then $$ \tilde N=(a^\perp)^v+Z_t^h, $$ where $\big<a^\perp,a\big>=0$ and the field $Z_t=Z_t^1\partial_1+Z_t^2\partial_2$ can be found easily from the equations $$ \begin{array}{l} \big<\big<\tilde \partial_i,\tilde N\big>\big>=\big<Z_t,\partial_i\big>+t\big<\nabla_{\partial_i}\,a,a^\perp\big>+\big<\nabla_{\partial_i}\,b,a^\perp\big>=0 \quad (i=1,2) \end{array} $$ Using the formulas (\ref{Kow}), one can find $$ \begin{array}{rl} \tilde\nabla_{\tilde\partial_i}a^v=&\tilde \nabla_{\partial_i^h+t(\nabla_{\partial_i}\,a)^v+(\nabla_{\partial_i}\,b)^v }a^v=\\[1ex] &(\nabla_{\partial_i}a)^v+\frac12\Big[R(\xi_t,a)\partial_i\Big]^h= (\nabla_{\partial_i}a)^v+\frac12\Big[R(b,a)\partial_i\Big]^h. \end{array} $$ If the submanifold $\xi_t(M^2)$ is totally geodesic, then the following equations should be satisfied identically $$ \big<\big<\tilde\nabla_{\tilde\partial_i}\tilde\partial_3,\tilde N\big>\big>=\big<\nabla_{\partial_i}a,a^\perp\big>+ \frac12\big<R(b,a)\partial_i,Z_t\big>=0 $$ with respect to the parameter $t$. To simplify the further calculations, suppose that the coordinate system on $M^2$ is the orthogonal one, so that $\big<\partial_1,\partial_2\big>=0$ and $$ R(b,a)\partial_2=g^{11}K\,|b\wedge a|\partial_1,\quad R(b,a)\partial_1=-g^{22}K\,|b\wedge a|\partial_2, $$ where $K$ is the Gaussian curvature of $M^2$ and $g^{11}, g^{22}$ are the contravariant metric coefficients. Then we have $$ \begin{array}{l} \big<R(b,a)\partial_1,Z_t\big>=-g^{22}K\,|b\wedge a|\big<Z_t,\partial_2\big>=\\[1ex] \hphantom{\big<R(b,a)\partial_1,Z_t\big>=} g^{22}K\,|b\wedge a|\Big(t\big<\nabla_{\partial_2}\,a,a^\perp\big>+\big<\nabla_{\partial_2}\,b,a^\perp\big>\Big),\\[2ex] \big<R(b,a)\partial_2,Z_t\big>=g^{11}K\,|b\wedge a|\big<Z_t,\partial_1\big>= \\[1ex] \hphantom{\big<R(b,a)\partial_1,Z_t\big>=} -g^{11}K\,|b\wedge a|\Big(t\big<\nabla_{\partial_1}\,a,a^\perp\big>+\big<\nabla_{\partial_1}\,b,a^\perp\big>\Big). 
\end{array} $$ Thus we get the system $$ \left\{\begin{array}{l} g^{22}K|b\wedge a|\big<\nabla_{\partial_2}a,a^\perp\big>t+\big<\nabla_{\partial_1}a,a^\perp\big>+ g^{22}K|b\wedge a|\big<\nabla_{\partial_2}b,a^\perp\big>=0,\\[2ex] g^{11}K|b\wedge a|\big<\nabla_{\partial_1}a,a^\perp\big>t-\big<\nabla_{\partial_2}a,a^\perp\big>+ g^{11}K|b\wedge a|\big<\nabla_{\partial_1}b,a^\perp\big>=0, \end{array} \right. $$ which should be satisfied identically with respect to $t$. As a consequence, we have 3 cases: \begin{itemize} \item[\bf(i)]\quad $K=0,\ \left\{\begin{array}{l}\big<\nabla_{\partial_1}\,a,a^\perp\big>=0, \\[1ex] \big<\nabla_{\partial_2}\,a,a^\perp\big>=0\end{array}\right.;$ \item[\bf(ii)]\quad $K\ne0$,\quad $|b\wedge a|=0,\ \left\{\begin{array}{l}\big<\nabla_{\partial_1}\,a,a^\perp\big>=0, \\[1ex] \big<\nabla_{\partial_2}\,a,a^\perp\big>=0\end{array}\right.;$ \item[\bf(iii)]\quad $K\ne0$,\ \ $|b\wedge a|\ne0$,\ \ $\left\{\begin{array}{l}\big<\nabla_{\partial_1}\,a,a^\perp\big>=0, \\[1ex] \big<\nabla_{\partial_2}\,a,a^\perp\big>=0\end{array}\right., \left\{\begin{array}{l}\big<\nabla_{\partial_1}\,b,a^\perp\big>=0, \\[1ex] \big<\nabla_{\partial_2}\,b,a^\perp\big>=0\end{array}\right.;$ \end{itemize} {Case (i).} In this case the base manifold is flat and we can choose a Cartesian coordinate system, so that the covariant derivation becomes a usual one and we have $$ \left\{ \begin{array}{l} \nabla_{\partial_i}\,a=\big\{0,\partial_i\alpha\big\} \quad(i=1,2)\\[1ex] a^\perp=\big\{-\alpha,1\big\} \end{array} \right. $$ From $\big<\nabla_{\partial_i}\,a,a^\perp\big>=0$ it follows that $\alpha=const$, i.e. $a$ is a parallel vector field. 
Moreover, in this case $$ \begin{array}{l} \tilde \partial_1=\big\{1,0,0,\partial_1\beta\big\}=\partial_1^h+(\partial_1\,b)^v,\\[1ex] \tilde \partial_2=\big\{0,1,0,\partial_2\beta\big\}=\partial_2^h+(\partial_2\,b)^v,\\[1ex] \tilde \partial_3=\big\{0,0,1,\alpha\big\},\\[1ex] \tilde N=\big\{-\partial_1\beta,-\partial_2\beta,-\alpha,\,1\big\}. \end{array} $$ Now we can find $$ \tilde\nabla_{\tilde\partial_i}\tilde\partial_k=(\nabla_{\partial_i}\partial_k b)^v=\big\{0,0,0,\partial_{ik}\beta\big\} $$ and the conditions $$ \big<\big<\tilde\nabla_{\tilde\partial_i}\tilde\partial_k,\tilde N\big>\big>=0 $$ imply $\partial_{ik}\beta=0$. Thus, $\beta=m_1u^1+m_2u^2+m_0$, where $m_1, m_2, m_0$ are arbitrary constants. As a consequence, the submanifold $\xi_t(M^2)$ is described by parametric equations of the form $$ \left\{ \begin{array}{l} x^1=u^1,\\[1ex] x^2=u^2,\\[1ex] \xi^1=t,\\[1ex] \xi^2=\alpha t+m_1u^1+m_2u^2+m_0 \end{array} \right. $$ and we have a hyperplane in $TM^2=E^4$. {Case (ii).} Keeping in mind (\ref{fields2}), the condition $b\wedge a=0$ implies $b=0$. The conditions $$ \left\{\begin{array}{l}\big<\nabla_{\partial_1}\,a,a^\perp\big>=0, \\[1ex] \big<\nabla_{\partial_2}\,a,a^\perp\big>=0\end{array}\right. $$ imply $\nabla_{\partial_1}\,a=\lambda_1(u)\,a,\ \nabla_{\partial_2}\,a=\lambda_2(u)\,a$. As a consequence, we have $$ \begin{array}{l} \xi_t=t\,a\\[1ex] \tilde \partial_1=\partial_1^h+t(\nabla_{\partial_1}\,a)^v=\partial_1^h+t\lambda_1\,a^v,\\[1ex] \tilde \partial_2=\partial_2^h+t(\nabla_{\partial_2}\,a)^v=\partial_2^h+t\lambda_2\,a^v,\\[1ex] \tilde \partial_3=a^v,\\[1ex] \tilde N=(a^\perp)^v.
\end{array} $$ Using formulas (\ref{Kow}), $$ \begin{array}{ll} \tilde\nabla_{\tilde\partial_i}\tilde\partial_k=&\tilde\nabla_{\partial_i^h+t\lambda_i\,a^v}\Big(\partial_k^h+t\lambda_k\,a^v\Big)=\\[2ex] &\tilde\nabla_{\partial_i^h}\partial_k^h+t\lambda_i\tilde\nabla_{a^v}\partial_k^h+\tilde\nabla_{\partial_i^h}(t\lambda_ka^v)+ t^2\lambda_i\lambda_k\tilde\nabla_{a^v}a^v=\\[2ex] &(\nabla_{\partial_i}\partial_k)^h-\frac12\Big[R(\partial_i,\partial_k)\xi_t\Big]^v+t\lambda_i\frac12\Big[R(\xi_t,a)\partial_k\Big]^h+\\[1ex] &t\partial_i(\lambda_k)a^v+t\lambda_k(\nabla_{\partial_i}a)^v+t\lambda_k\frac12\Big[R(\xi_t,a)\partial_i\Big]^h =\\[2ex] &(\nabla_{\partial_i}\partial_k)^h-t\frac12\Big[R(\partial_i,\partial_k)a\Big]^v+t\partial_i(\lambda_k)a^v+t\lambda_k\lambda_i\,a^v. \end{array} $$ Evidently, for $i\ne k$ $$ \big<\big<\tilde\nabla_{\tilde\partial_i}\tilde\partial_k,\tilde N\big>\big>=-t\frac12\big<R(\partial_i,\partial_k)a,a^\perp\big>\ne0, $$ since $M^2$ is non-flat and $a\ne0$. Contradiction. {Case (iii).} The conditions imply $$ \nabla_i a=\lambda_i(u)\, a,\quad \nabla_i b=\mu_i(u)\, a \quad (i=1,2) $$ and we have $$ \begin{array}{l} \xi_t=t\,a+b\\[1ex] \tilde \partial_1=\partial_1^h+(t\lambda_1+\mu_1)\,a^v,\\[1ex] \tilde \partial_2=\partial_2^h+(t\lambda_2+\mu_2)\,a^v,\\[1ex] \tilde \partial_3=a^v,\\[1ex] \tilde N=(a^\perp)^v. \end{array} $$ A calculation as above leads to the identity \begin{multline*} \big<\big<\tilde\nabla_{\tilde\partial_i}\tilde\partial_k,\tilde N\big>\big>=-\frac12\big<R(\partial_i,\partial_k)\xi_t,a^\perp\big>=\\ -t\frac12\big<R(\partial_i,\partial_k)a,a^\perp\big>-\frac12\big<R(\partial_i,\partial_k)b,a^\perp\big>=0 \end{multline*} which can be true if and only if $$ \left\{ \begin{array}{l} \big<R(\partial_i,\partial_k)a,a^\perp\big>=0,\\[1ex] \big<R(\partial_i,\partial_k)b,a^\perp\big>=0. \end{array} \right. $$ The first condition contradicts $K\ne0$.
\textbf{Consider the case (b).} In this case the submanifold $\tilde F^3$ can be locally parametrized by $$ \left\{ \begin{array}{l} x^1=u^1,\\[1ex] x^2=x^2(u^1,u^2,u^3)\\[1ex] \xi^1=u^2,\\[1ex] \xi^2=u^3. \end{array} \right. $$ Since we exclude the case (a), we should suppose $$ \det \left( \begin{array}{ccc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2}& \frac{\partial x^1}{\partial u^3} \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2}& \frac{\partial x^2}{\partial u^3} \\[1ex] \frac{\partial \xi^1}{\partial u^1} & \frac{\partial \xi^1}{\partial u^2} & \frac{\partial \xi^1}{\partial u^3} \\[1ex] \end{array} \right)= \det \left( \begin{array}{ccc} 1 & 0& 0 \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2}& \frac{\partial x^2}{\partial u^3} \\[1ex] 0 & 1 & 0 \\[1ex] \end{array} \right)=-\frac{\partial x^2}{\partial u^3}=0; $$ $$ \det \left( \begin{array}{ccc} \frac{\partial x^1}{\partial u^1} & \frac{\partial x^1}{\partial u^2}& \frac{\partial x^1}{\partial u^3} \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2}& \frac{\partial x^2}{\partial u^3} \\[1ex] \frac{\partial \xi^2}{\partial u^1} & \frac{\partial \xi^2}{\partial u^2} & \frac{\partial \xi^2}{\partial u^3} \\[1ex] \end{array} \right)= \det \left( \begin{array}{ccc} 1 & 0& 0 \\[1ex] \frac{\partial x^2}{\partial u^1} & \frac{\partial x^2}{\partial u^2}& \frac{\partial x^2}{\partial u^3} \\[1ex] 0 & 0 & 1 \\[1ex] \end{array} \right)=\frac{\partial x^2}{\partial u^2}=0; $$ Therefore, in this case we have a submanifold, which can be parametrized by $$ \left\{ \begin{array}{l} x^1=x^1(s),\\[1ex] x^2=x^2(s)\\[1ex] \xi^1=u^2,\\[1ex] \xi^2=u^3, \end{array} \right. $$ where $s$ is a natural parameter of the regular curve $\gamma(s)=\big\{x^1(s), x^2(s)\big\}$ on $M^2$. Geometrically, a submanifold of this class is nothing else but the restriction of $TM^2$ to the curve $\gamma(s)$. 
Denote by $\tau$ and $\nu$ the Frenet frame of $\gamma(s)$. It is easy to verify that $$ \tilde \partial_1=\tau\,^h,\quad \tilde \partial_2=\partial_1^v,\quad \tilde \partial_3=\partial_2^v,\quad \tilde N=\nu\,^h. $$ By formulas (\ref{Kow}), for $i=1,2$ $$ \big<\big<\tilde \nabla_{\tilde\partial_{1+i}}\tilde N,\tilde \partial_1 \big>\big>= \big<\big<\tilde \nabla_{\partial_{i}^v}\nu^h,\tau^h \big>\big>=\frac12\big<R(\xi,\partial_i)\nu,\tau\big>=\frac12\big<R(\tau,\nu)\partial_i,\xi\big>=0 $$ for arbitrary $\xi$. Evidently, $M^2$ must be flat along $\gamma(s)$. \end{proof} \end{document}
arXiv
Krieger–Nelson Prize The Krieger–Nelson Prize is presented by the Canadian Mathematical Society in recognition of an outstanding woman in mathematics. It was first awarded in 1995. The award is named after Cecilia Krieger and Evelyn Nelson, both known for their contributions to mathematics in Canada.[1][2] Recipients While the award has largely been awarded to a female mathematician working at a Canadian University, it has also been awarded to Canadian-born or -educated women working outside of the country. For example, Cathleen Morawetz, past president of the American Mathematical Society, and a faculty member at the Courant Institute of Mathematical Sciences (a division of New York University) was awarded the Krieger–Nelson Prize in 1997. (Morawetz was educated at the University of Toronto in Toronto, Canada). According to the call for applications, the award winner should be a "member of the Canadian mathematical community".[3] The recipient of the Krieger–Nelson Prize delivers a lecture to the Canadian Mathematical Society, typically during its summer meeting.[3] • 1995 Nancy Reid • 1996 Olga Kharlampovich • 1997 Cathleen Synge Morawetz • 1998 Catherine Sulem • 1999 Nicole Tomczak-Jaegermann • 2000 Kanta Gupta • 2001 Lisa Jeffrey • 2002 Cindy Greenwood • 2003 Leah Keshet • 2004 Not Awarded • 2005 Barbara Keyfitz • 2006 Penny Haxell • 2007 Pauline van den Driessche • 2008 Izabella Łaba • 2009 Yael Karshon • 2010 Lia Bronsard • 2011 Rachel Kuske • 2012 Ailana Fraser • 2013 Chantal David • 2014 Gail Wolkowicz • 2015 Jane Ye • 2016 Malabika Pramanik • 2017 Stephanie van Willigenburg • 2018 Megumi Harada • 2019 Julia Gordon • 2020 Sujatha Ramdorai • 2021 Anita Layton • 2022 Matilde Lalín • 2023 Johanna G. Nešlehová See also • List of mathematics awards References 1. Krieger–Nelson Prize, Canadian Mathematical Society. 2. 
"The Krieger–Nelson Prize of the Canadian Mathematical Society", MacTutor History of Mathematics archive, University of St Andrews, retrieved 2020-01-18 3. Call for Applications Krieger–Nelson Prize Lectureship, Canadian Mathematical Society. External links • Krieger–Nelson Prize, Canadian Mathematical Society.
Wikipedia
Julia F. Knight Julia Frandsen Knight is an American mathematician, specializing in model theory and computability theory.[1] She is the Charles L. Huisking Professor of Mathematics at the University of Notre Dame and director of the graduate program in mathematics there.[2] Education Knight did her undergraduate studies at Utah State University, graduating in 1964, and earned her Ph.D. from the University of California, Berkeley in 1972 under the supervision of Robert Lawson Vaught.[1][3] Honors and awards In 2012, she became a fellow of the American Mathematical Society[4] and she was elected to be the 30th president of the Association for Symbolic Logic.[5] She was named MSRI Simons Professor for Fall 2020.[6] In 2014, Knight held the Gödel Lecture, titled Computable structure theory and formulas of special forms. References 1. Faculty profile, Notre Dame, retrieved 2013-10-16. 2. Julia Knight – Named professorships and directorships at Notre Dame Archived 2013-10-17 at the Wayback Machine, retrieved 2013-10-16. 3. Julia F. Knight at the Mathematics Genealogy Project 4. List of AMS Fellows, retrieved 2013-10-16. 5. ASL Newsletter, January 2019. 6. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 2021-06-07.
Wikipedia
SNP by SNP by environment interaction network of alcoholism Selected original research articles from the Third International Workshop on Computational Network Biology: Modeling, Analysis, and Control (CNB-MAC 2016): systems biology Amin Zollanvari1,2 & Gil Alterovitz2,3 Alcoholism has a strong genetic component. Twin studies have demonstrated that a large proportion of the phenotypic variance of alcoholism, ranging from 50% to 80%, is heritable. The search for genetic variants associated with this complex behavior has dominated sequence-based studies for nearly a decade. However, the limited success of genome-wide association studies (GWAS), possibly precipitated by the polygenic nature of complex traits and behaviors, has demonstrated the need for novel, multivariate models capable of quantitatively capturing interactions between a host of genetic variants and their association with non-genetic factors. In this regard, capturing the network of SNP by SNP or SNP by environment interactions has recently gained much interest. Here, we assessed 3,776 individuals to construct a network capable of detecting and quantifying the interactions within and between plausible genetic and environmental factors of alcoholism. To this end, we propose the use of a first-order dependence tree of maximum weight as a potential statistical learning technique to delineate the pattern of dependencies underpinning such a complex trait. Using a prediction-based analysis, we further rank the genes, demographic factors, biological pathways, and the interactions represented by our SNP×SNP×E network. The proposed framework is quite general and can potentially be applied to the study of other complex traits. Alcohol dependence is characterized by increasing tolerance to and consumption of alcohol, even in the face of adverse effects [1]. Almost 14% of alcohol consumers in the United States meet the criteria for alcohol dependence at some point in their lifetimes [2].
The consequences of alcohol dependence are severe. Overconsumption of alcohol is known to be a contributing factor to more than 60 diseases, including several types of cancer, and accounts for approximately 2.5 million deaths each year [3]. Alcoholism is very difficult to overcome once it initiates, and thus there has been much interest in preventing the onset of alcoholism altogether [3]. The construction of a genetic model of alcoholism has become increasingly possible with new genetic case–control studies of the disease [2]. Indeed, alcoholism is particularly amenable to a genetic model, as the genetic basis of the disease is strong. Adoption studies have demonstrated that children with alcoholic biological parents are likely to become alcoholics themselves, even if they are reared by adoptive parents in environments with few traces of alcohol [4]. Most adoption and twin studies suggest that 50–80% of variation in the phenotype is due to genetic factors [5]. That different people have different initial levels of tolerance to alcohol and thus different propensities to become physically addicted to it is further evidence of the genetic basis of the disease. That said, the same studies that have pointed to genetic factors have shown that demographic factors such as culture and level of education also contribute to alcoholism [6]. Thus, an effective model of alcoholism should incorporate both demographic and genetic information. There have been several association studies that have sought to identify a small number of susceptibility loci for alcoholism [7]. However, complex traits like alcoholism are commonly underpinned by numerous factors, genetic as well as demographic, each of which has a small effect size [8]. 
Thus, many genome-wide association (GWA) studies on alcoholism have struggled to pinpoint individual single nucleotide polymorphisms (SNPs) that explain a good portion of the variation in the phenotype; the best odds ratios for individual SNPs reported in [2] were around 2, a relatively low figure. The detected variants with such a small effect size have explained a small portion of heritability. This problem is not specific to alcoholism; it is common to many other GWA studies and is referred to as the "missing" heritability problem [9]. Various explanations have been suggested for the missing heritability [9], e.g., the existence of rare variants with larger effect size that are not detectable with current genotyping techniques; more variants of small effect size that are not yet detected; and gene-gene (G×G) or gene-environment (G×E) interactions that are not discovered. The latter has resulted in various complementary studies to detect the SNP×SNP or SNP×E interactions in different phenotypes. For example, Jamshidi, et al. conducted a two-SNP interaction analysis and compared Cox regression models of pairs of SNPs with and without an interaction term, i.e., SNP1+SNP2 vs. SNP1+SNP2+(SNP1×SNP2) [10]. For each pair of SNPs, the best model was selected based on the p-value of the likelihood ratio. Similarly, a logistic regression model SNP+E+(SNP×E) was used in [11] to identify a possible interaction between each SNP and the environment. The p-value of the interaction term was used to declare the significance of the interaction. Limitations of linear or logistic regression analysis in detecting SNP-SNP interactions have been discussed elsewhere [12]. In particular, when the susceptibility to disease is caused by the interaction among several factors, the number of parameters required to fit a (logistic) regression model increases exponentially.
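As a concrete illustration of the pairwise modeling strategy described above, the following sketch compares the models SNP1+SNP2 and SNP1+SNP2+(SNP1×SNP2) with a likelihood-ratio test. This is a minimal, self-contained illustration with a plain-Python gradient-ascent logistic fit rather than the survival (Cox) models of [10]; all data and function names are hypothetical:

```python
import math

def fit_logistic(X, y, iters=500, lr=1.0):
    """Fit logistic regression by gradient ascent; return (weights, log-likelihood)."""
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (yi - p) * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(a * b for a, b in zip(w, xi))))
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard the logs against 0 and 1
        ll += yi * math.log(p) + (1 - yi) * math.log(1.0 - p)
    return w, ll

def interaction_lrt(snp1, snp2, y):
    """Likelihood-ratio test of SNP1+SNP2 vs. SNP1+SNP2+(SNP1*SNP2)."""
    base = [[1.0, a, b] for a, b in zip(snp1, snp2)]          # intercept, SNP1, SNP2
    full = [[1.0, a, b, a * b] for a, b in zip(snp1, snp2)]   # ... plus interaction
    _, ll0 = fit_logistic(base, y)
    _, ll1 = fit_logistic(full, y)
    stat = max(2.0 * (ll1 - ll0), 0.0)               # ~ chi-square, 1 df under H0
    return stat, math.erfc(math.sqrt(stat / 2.0))    # 1-df chi-square tail probability
```

Under the null of no interaction the statistic is asymptotically chi-square with one degree of freedom, matching the per-pair model comparison idea in [10].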
This is not only a computational challenge for constructing the regression model, but it also results in the quasi-complete separation effect (also known as the "empty-cell" effect), in which case the estimate of parameters may not exist [13–15]. Therefore, rather than fitting one single unified regression model of many SNPs, researchers commonly fit many regression models of a pair of SNPs and either combine their results by further analysis (e.g. the gene-level analysis in [11]), or draw conclusions directly based on the results of the many fitted regression models (e.g., [10]). Here, in an effort to discover plausible epistasis, i.e., non-additive SNP associations with the alcoholism phenotype, we propose the use of the first-order dependence tree of maximum weight. Although this technique was first proposed by Chow and Liu [16], its application in GWAS remains unexplored. This technique not only leads to an intuitive interpretation of detected interactions, but at the same time provides the maximum likelihood estimate of the joint distribution of SNPs and/or environmental variables given the phenotypic label (case or control). At the core of this network approach is the mutual information of pairs of variables. However, in contrast with other network approaches such as [17–19] that also employ mutual information among SNPs/genes, the knowledge of the joint distribution here creates a flow of information across the nodes and edges of the network upon which inference is possible. In other words, the detected interactions are unified in a single probabilistic network. Based on the constructed network, we propose complementary analyses to rank the demographic factors, genes, and biological pathways of alcoholism and compare our findings to prior domain knowledge.
The SNP×SNP×E Network of Alcoholism
The Manhattan plot in Fig.
1 shows the significance of association of each SNP from the genome-wide association analysis conducted in the available cohort of alcoholism. In this plot, each marker is represented by a dot and the –log10 (p-value) is displayed on the y-axis. Markers above the horizontal black line (p<0.0005) have been used in subsequent analysis for construction of the SNP×SNP×E network of alcoholism (see Methods Section for more details). Figures 2 and 3 provide the full picture of the SNP×SNP×E network and a sub-graph of this network, respectively. Data collection, preprocessing, and the working principle of the model are described in the Methods Section. The network has 413 nodes (397 SNPs, 15 environmental factors (Table 1), and one phenotypic variable). An edge from a node (parent node) to another node (child node) indicates the conditional probability of the child node being in a state (homozygous wild-type or BB, heterozygous or Bb, and homozygous mutant or bb) given the state of the parent node. Note that each node can have either a single parent or two parents, one of which is always the phenotypic node with two states (case and control). The 397 SNPs in the network are found in the 21 chromosomal regions that have been linked to alcoholism in previous association or linkage studies (all of which employed datasets and/or statistical methods different from ours).
Manhattan plot of raw p-values from genome-wide association analysis (CMH test). Markers above the horizontal black line (p<0.0005) have been used in the iterative network construction. For the actual p-values and ranking of these 652 SNPs, see Additional file 2: Table S1
The SNP×SNP×E network of Alcoholism. The network contains 397 SNPs and 15 demographic variables. The nodes represent variables and an edge between two nodes represents their dependency quantified by conditional probabilities. For the node labels and the complete list of interactions see Additional file 3: Table S2.
To enhance the quality of representation, we have removed the "alcoholism" node and the edges from this node to all other nodes
A subgraph of the SNP×SNP×E network in Fig. 2. All demographic factors are included, as well as the SNPs of several genes that have multiple SNPs in the network. Each blue box is labeled with the gene on which all of the SNPs within the box are found. The grey box contains all of the demographic factors
Table 1 Demographic variables used in the SNP×SNP×E network
Figures 2 and 3 confirm the frequent assertion that alcoholism is a byproduct of genetic and demographic factors. Based on Fig. 3, there seem to be a few likely reasons why such a synergy exists between demographic and genetic variables. First, the inclusion of race allowed the network to distinguish between SNPs that increase the risk of alcoholism only in African Americans (AAs) and those that do so only in European Americans (EAs). It is clear from Fig. 3 that there are a large number of SNPs that fit that description. As further evidence of race's role, removal of race from the demographic-genetic classifier results in a decline in area under the receiver operating characteristic curve (AUC) of 8.7%, the largest decline produced by removing any single feature from the network. Note that throughout this work, the AUC metric is merely used for ranking purposes (see Methods Section for details).
Results of network composition analysis
We sought to rank the genes, demographic factors, biological pathways, and the interactions represented in our SNP×SNP×E network. In prior studies on modeling the gene effect based on SNP-level interactions using regression analysis, the test statistic is obtained by summing the chi-squared 1-degree-of-freedom statistics within the gene, e.g., see [11]. However, here constructing an MWDT gives us an alternative and more intuitive way to combine the effect of various SNPs in a gene-level analysis based on the AUC metric.
In this regard, we sought to dissect our network to identify strong associations between alcoholism and genes, demographic variables, biological pathways, and interactions among factors. The results of the analysis (see Methods Section for details) are shown in Table 2. As described next, the literature explicitly confirms some of the identified associations, providing further evidence that the network is not spurious. In other cases, we found evidence in the literature suggestive of the validity of associations. A few associations are not corroborated by domain knowledge, but the general alignment of our results with prior work suggests that they may offer insight into the emergence of alcoholism; these associations are worthy candidates for further study.
Table 2 (a) The 14 most significant genes (p < 0.01) in the SNP×SNP×E network, including the intergenic set. 221 total genes were considered; (b) The four significant demographic factors (p < 0.05) in the SNP×SNP×E network. 15 total demographic factors were considered; (c) The four significant interactions (p < 0.05) in the demographic-genetic model. 427 total interactions were considered
Alcohol has a variety of effects on the body; many of these arise from alcohol's activation of receptors in the brain [20]. A number of the genes identified in our analysis have important functions in the brain. In total, 9 of the 13 genes listed in Table 2a (excluding the intergenic set) either have been explicitly associated with alcoholism in the literature or have functional ties to the disease (e.g. are involved in brain activity). Three genes have been explicitly associated with the development of alcoholism. CPE has been identified in prior GWA studies on alcoholism [7], and it encodes the enzyme carboxypeptidase E, which activates neuropeptides [21], proteins crucial to communication among neurons.
PKNOX2, which regulates the transcription of other genes and affects anatomical development [22], has been linked to various types of substance abuse in European women [23]. GLT25D2 was identified as related to alcoholism in a GWA study on a dataset that had no samples in common with ours [24]. Five other genes have functional ties to alcoholism and the development of the behavior (Additional file 1, Supplementary Notes, Section 1). While many identified genes were generally in alignment with prior knowledge, further work should be done to understand the associations between alcoholism and the five genes that went uncorroborated in the literature (BLNK, BMPER, PDLIM5, VEPH1, AMPD3). Finally, the high importance of intergenic SNPs in Table 2a is surprising, but similar SNPs have been tied in prior GWA studies to alcoholism [25], and the noncoding RNA that is transcribed from intergenic regions affects gene expression levels in some cases [26].
G×G and G×E Interactions
Table 2b and c show demographic variables and interactions with a significant p-value (see Methods Section for details). Some of these factors and interactions are explicitly stated in prior studies. For example, a prior study [27] has demonstrated that alcohol consumption is negatively correlated with both income and educational status, both of which were deemed important demographic factors in Table 2b. The significance of the edge between income and education is sensible as well, as the conditional probability tables of the network indicate that a high level of education may be able to counteract a low level of income with respect to the development of alcoholism, and vice versa. Another prior study [28] provides the reason for the significance of the edge between race and income: there is a much stronger association between income and alcoholism in African Americans than in European Americans.
Although no SNP-SNP interactions were deemed significant, the numerous SNP-SNP interactions that connect SNPs on the same gene (see Fig. 3) are reasonable, as SNPs that are closer together are more likely to interact and/or affect the same function [29]. There is also an interesting interaction between race and rs8225 in Table 2c (decline in AUC has p<0.04). While we used an AUC-metric-based approach to highlight this interaction, one may realize the importance of such a link by examining the distribution of rs8225 among cases and controls in both races. As presented in Table 3, the distribution of this variant is substantially different between the two race groups in both cases and controls (difference of distribution of AAs and EAs in controls has a p<10^−15 and in cases p<10^−15 as determined by the Cochran-Armitage test [30]). The within-race-group distribution of this variant is also significantly different between cases and controls (difference of distribution of AAs in controls and cases has a p<0.005 and this difference for EAs has p<0.0002 as determined by the Cochran-Armitage test [30]). Another interesting interaction in Table 2c is the interaction of sex and rs5933820 (decline in AUC has p<0.02). Although rs5933820 is located on the X chromosome, its appearance as a significant interaction with sex in the context of alcoholism seems interesting and needs further validation and functional analysis.
Table 3 Distribution of rs8225 in the two race groups among cases and controls. The link between this variant and "race" group is determined to be statistically significant (see Table 2c)
Biological Pathways
Twelve of the 14 biological pathways detected in our analysis (Table 4) have already been linked in the literature, either explicitly or indirectly, to alcoholism. Two pathways have been explicitly cited for their involvement in the development of alcohol dependence. Fombonne, et al.
demonstrated that children with long-term depression are at higher risk for alcohol dependence in adulthood [31]. The binding of GABA receptors, which are neuroactive ligand receptors, was found to be abnormally high in the brains of alcoholics [32]. Evidence in the literature suggests that four pathways may be involved in the emergence of alcoholism. It has been noted that alcohol inhibits the reorganization of the actin cytoskeleton [33]. Chronic exposure to alcohol reduces calcium signaling in response to glutamate receptor stimulation in neuronal cells [34]. Exposure of intestinal Gram negative bacteria to alcohol results in accumulation of acetaldehyde, which in turn increases tyrosine phosphorylation of adherens junction proteins [35]. Treatment of the ventral tegmental area in mice with glial cell line-derived neurotrophic factor activated the MAPK signaling pathway and reduced desire for alcohol [36]. Six pathways do not seem likely to be involved in the onset of alcoholism, but do appear to have links to the behavior (Additional file 1, Supplementary Notes, Section 1). Due to the overall alignment of the results of the analysis with the literature, it is likely that the two pathways that have not yet been explicitly tied in some way to alcoholism (dilated cardiomyopathy and hypertrophic cardiomyopathy) have links to the behavior; further study is required to confirm such associations.
Table 4 The 14 significant biological pathways (p < 0.05) in the demographic-genetic model. 186 total pathways were considered
The analytical machinery proposed in this study can be potentially used to capture the complex multifactor effects between many genetic and environmental factors, providing a characterization of the underlying biological and environmental mechanism that determines the phenotype. The underlying framework is quite general and we anticipate seeing it applied to the study of other complex traits.
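The race- and case-stratified genotype comparisons reported above used the Cochran-Armitage trend test [30]. A minimal stdlib sketch with the standard additive genotype scores 0/1/2 follows; the counts used in the usage example are illustrative, not the study's data:

```python
import math

def cochran_armitage(case_counts, control_counts, scores=(0, 1, 2)):
    """Trend test for a 2 x 3 genotype table; returns (Z, two-sided p-value)."""
    n_i = [a + b for a, b in zip(case_counts, control_counts)]  # column totals
    R, N = sum(case_counts), sum(case_counts) + sum(control_counts)
    p = R / float(N)                                            # overall case rate
    # Score-weighted deviation of case counts from expectation under no trend.
    U = sum(t * (r - n * p) for t, r, n in zip(scores, case_counts, n_i))
    var = p * (1 - p) * (sum(t * t * n for t, n in zip(scores, n_i))
                         - sum(t * n for t, n in zip(scores, n_i)) ** 2 / float(N))
    z = U / math.sqrt(var)
    return z, math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal tail
```

For example, a table with a strong dose-response pattern such as cases (10, 30, 60) vs. controls (60, 30, 10) yields a large |Z| and a vanishing p-value, while identical case/control distributions give Z = 0.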
The gene-gene-environment interactions are also known as one possible source of the "missing" heritability problem. In this regard, the next natural step is to use the proposed framework to quantify the proportion of the missing heritability explained by identified interactions.
Data Collection and Preprocessing
We utilized SAGE data [2], which featured 3,829 subjects and considered 948,658 SNPs from across the human genome, as well as several demographic variables. The data included human samples from three prior studies [2]; 30% of the individuals were African Americans and 70% were European Americans. The SAGE dataset includes 1,897 Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) cases and 1,932 alcohol-exposed non-dependents. We used 15 environmental variables (demographic factors) that are listed in Table 1. Several demographic factors were left out, especially ones relating to comorbidities, because their distributions across the cases and controls were heavily imbalanced. All continuous demographic variables in the data (e.g. income) were discretized. We first removed any SNPs out of Hardy-Weinberg equilibrium (p < 0.0001). Hardy-Weinberg equilibrium tests were run separately on the African Americans and the European Americans in order to ensure identification of any SNPs common only in one race out of equilibrium. SNPs with minor allele frequency (MAF) below 0.01 or call rate below 98% were also removed from consideration, leaving a total of 934,128 SNPs. Finally, the 3,776 samples (1,909 cases and 1,867 controls) with a genotyping rate above 98% were maintained. A Cochran-Mantel-Haenszel (CMH) association test was used to rank the 934,128 SNPs [30]. The association analysis was performed with the software PLINK [37]. The top 652 SNPs (p < 0.0005) were maintained for network construction as detailed in the next few subsections.
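The quality-control filters above (HWE at p < 0.0001, MAF ≥ 0.01, call rate ≥ 98%) can be sketched for a single SNP. This is a stdlib illustration of the filtering logic, not the PLINK implementation actually used in the study; genotypes are coded as minor-allele counts 0/1/2 with None for missing calls:

```python
import math

def hwe_pvalue(n_aa, n_ab, n_bb):
    """Chi-square (1 df) goodness-of-fit test of Hardy-Weinberg equilibrium."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2.0 * n)              # frequency of the A allele
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) * (1 - p)]
    chi2 = sum((o - e) ** 2 / e
               for o, e in zip([n_aa, n_ab, n_bb], expected) if e > 0)
    return math.erfc(math.sqrt(chi2 / 2.0))        # 1-df chi-square tail

def passes_qc(genotypes, maf_min=0.01, call_rate_min=0.98, hwe_p_min=1e-4):
    """Apply the call-rate, MAF, and HWE filters to one SNP's genotype vector."""
    called = [g for g in genotypes if g is not None]
    if len(called) / float(len(genotypes)) < call_rate_min:
        return False
    maf = sum(called) / (2.0 * len(called))
    maf = min(maf, 1.0 - maf)
    if maf < maf_min:
        return False
    n_aa, n_ab, n_bb = (called.count(2), called.count(1), called.count(0))
    return hwe_pvalue(n_aa, n_ab, n_bb) >= hwe_p_min
```

A SNP with genotype counts close to Hardy-Weinberg proportions passes, while one with a complete heterozygote deficit, a rare variant, or many missing calls is filtered out.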
Maximum-weight Dependence Tree (MWDT)
The first-order dependence tree of maximum weight was initially proposed by Chow and Liu [16] and further developed and evaluated by Friedman et al. [38]. Although there is no biological evidence that the dependence between variables (genes or SNPs) follows a tree structure, limitations on the number of available sample points relative to the complexity of the problem at hand require that the joint distribution of variables be approximated under some simplifying assumptions. In this regard, the tree dependence assumption approximates an nth-order joint probability distribution by a product of n−1 second-order distributions. To understand the working principle in the context of GWAS, let P(x) denote the probability mass function of a random vector x. The mutual information between two variables (here SNP1 and SNP2) is given by
$$ I(\mathrm{SNP}_1, \mathrm{SNP}_2) = \sum_{\mathrm{SNP}_1, \mathrm{SNP}_2} P(\mathrm{SNP}_1, \mathrm{SNP}_2)\, \log \frac{P(\mathrm{SNP}_1, \mathrm{SNP}_2)}{P(\mathrm{SNP}_1)\, P(\mathrm{SNP}_2)} $$
Intuitively, \( I(\mathrm{SNP}_1, \mathrm{SNP}_2) \) measures the amount of information that SNP1 carries about SNP2 and vice versa. In a graphical representation of dependency among SNPs, we assume the dependencies have a tree structure (meaning each node has a single parent, except one node, the root, which has no parent), and assign to every edge of the tree the weight \( I(\mathrm{SNP}_i, \mathrm{SNP}_{m_i}) \). The tree with the maximum weight is then the one that maximizes \( \sum_{i=1}^n I(\mathrm{SNP}_i, \mathrm{SNP}_{m_i}) \), where \( m_i \) denotes the parent node of node i and n is the number of SNPs under study.
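In practice, the mutual information above is estimated from genotype data. A simple plug-in estimator can be sketched as follows (a stdlib illustration; genotype coding 0/1/2 is assumed):

```python
from collections import Counter
from math import log

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in nats from paired discrete observations."""
    n = float(len(x))
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    # Sum over observed joint cells only; unobserved cells contribute zero.
    return sum((c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())
```

Two identical genotype sequences yield the full entropy of the variable, while empirically independent sequences yield an estimate of zero.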
Note that there is no difficulty in maximizing \( \sum_{i=1}^n I(\mathrm{SNP}_i, \mathrm{SNP}_{m_i}) \) without considering the class labels; however, doing so leads to a static network that may not differentiate one class from another. In other words, it is not possible to use such a network as an inferential tool. The technique originally proposed in [16] resolves this problem by stratifying the samples at the outset and constructing one network for each class. Nevertheless, having a different network of interactions for each class not only makes inference a more difficult and elusive task, but may also lack biological grounding. In a case–control study, we can instead define a "class" variable C and measure the amount of information between SNPs given the phenotype (case or control). In this case, the maximum-weight first-order dependence tree becomes the one with the maximum \( \sum_{i=1}^n I(\mathrm{SNP}_i, \mathrm{SNP}_{m_i} \mid C) \). Under the first-order tree assumption on the structure of dependencies between SNPs, one can write the joint distribution of all SNPs given C as
$$ P(\mathrm{SNP}_1, \mathrm{SNP}_2, \dots, \mathrm{SNP}_n \mid C) = \prod_{i=1}^n P(\mathrm{SNP}_i \mid \mathrm{SNP}_{m_i}, C) $$
This decomposition of the joint probability into a product of second-order distributions (the distribution of first-order tree dependence) leads to an algorithm that can "grow" the tree in polynomial time (the Kruskal algorithm detailed in [16]). In practice, the conditional probability distributions are not known and must be estimated from data. Nevertheless, it can be shown that, owing to the decomposition of the joint probability distribution mentioned above, the strategy that finds the tree with maximum weight also yields the maximum likelihood estimate (MLE) of the joint distribution.
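The tree-growing step can be sketched as a Kruskal-style pass over class-conditional mutual information edge weights, as below. This is a self-contained toy illustration (variable names and data are hypothetical), not the study's implementation:

```python
from collections import Counter
from itertools import combinations
from math import log

def mi(x, y):
    """Plug-in mutual information of two discrete sequences (nats)."""
    n = float(len(x))
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def cond_mi(x, y, cls):
    """I(X;Y|C): class-weighted mutual information within each class stratum."""
    n = float(len(cls))
    total = 0.0
    for c in set(cls):
        idx = [i for i, v in enumerate(cls) if v == c]
        total += (len(idx) / n) * mi([x[i] for i in idx], [y[i] for i in idx])
    return total

def mwdt(data, cls):
    """data: {name: observations}. Return edges of the maximum-weight
    dependence tree under I(.;.|C) weights (Kruskal with union-find)."""
    names = sorted(data)
    edges = sorted(((cond_mi(data[a], data[b], cls), a, b)
                    for a, b in combinations(names, 2)), reverse=True)
    parent = {v: v for v in names}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    tree = []
    for w, a, b in edges:            # heaviest edges first
        ra, rb = find(a), find(b)
        if ra != rb:                 # keep the edge only if it joins two components
            parent[ra] = rb
            tree.append((a, b, w))
    return tree
```

On any set of n variables the result has n − 1 edges, and pairs with strong class-conditional dependence are connected first.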
In other words, finding the tree with maximum \( \sum_{i=1}^n \widehat{I}(\mathrm{SNP}_i, \mathrm{SNP}_{m_i} \mid C) \), with \( \widehat{I}(\mathrm{SNP}_i, \mathrm{SNP}_{m_i} \mid C) \) being the sample estimate of \( I(\mathrm{SNP}_i, \mathrm{SNP}_{m_i} \mid C) \), is equivalent to the MLE of the joint distribution of SNPs, \( P(\mathrm{SNP}_1, \mathrm{SNP}_2, \dots, \mathrm{SNP}_n \mid C) \), under the first-order dependence tree structure (see [16]). This implies that if the true dependence between SNPs has a tree structure, then as the sample size increases, the estimated trees converge to the true tree with probability one. For further details on estimating \( I(\mathrm{SNP}_i, \mathrm{SNP}_{m_i} \mid C) \), see Additional file 1, Supplementary Notes, Section 2. Another interesting feature of the MWDT is that approximating and estimating the joint distribution of SNPs creates a flow of information among the nodes of the network. As opposed to other network approaches based on mutual information [17–19], this property gives us the ability to employ the network as an inferential tool. For example, for an observation of unknown class, one can assign a case label if
$$ \prod_{i=1}^n P(\mathrm{SNP}_i \mid \mathrm{SNP}_{m_i}, \mathrm{C} = \mathrm{case}) > \prod_{i=1}^n P(\mathrm{SNP}_i \mid \mathrm{SNP}_{m_i}, \mathrm{C} = \mathrm{control}) $$
AUC in Ranking Networks of Interactions
From the previous section, we note that the MWDT guarantees the maximum likelihood estimate of the joint distribution given the true tree dependency among a set of given SNPs. However, for a set of SNPs of size n (here the 652 SNPs selected as described before), there are \( 2^n - 1 \) potential maximum-weight networks, one for each non-empty subset of the n variables.
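The inference rule above can be illustrated with a two-SNP tree and made-up conditional probability tables (all numbers below are hypothetical, chosen only to show the mechanics; log-space is used to avoid underflow with many factors):

```python
from math import log

# Hypothetical CPTs: SNP1 is the tree root, SNP2's parent is SNP1.
# Genotype states are coded 0/1/2 (minor-allele counts).
P_SNP1 = {"case": [0.2, 0.3, 0.5], "control": [0.5, 0.3, 0.2]}
P_SNP2 = {  # rows indexed by SNP1 state, columns by SNP2 state
    "case":    [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]],
    "control": [[0.6, 0.3, 0.1], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]],
}

def log_likelihood(snp1, snp2, cls):
    """log P(SNP1, SNP2 | class) under the tree factorization."""
    return log(P_SNP1[cls][snp1]) + log(P_SNP2[cls][snp1][snp2])

def classify(snp1, snp2):
    """Assign the class with the larger tree-factorized likelihood."""
    return ("case"
            if log_likelihood(snp1, snp2, "case")
            > log_likelihood(snp1, snp2, "control")
            else "control")
```

With these illustrative tables, a homozygous-mutant observation at both SNPs is labeled a case, while a homozygous wild-type observation is labeled a control.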
Of course, one may choose to grow the tree on all n SNPs, but here we propose a complementary step to further narrow down the list of potential genetic factors used in the proposed network of alcoholism. To do so, we use the network as a classifier and use the AUC to rank a set of potential networks (see next subsection) and choose the one with the highest AUC. Unless otherwise stated, we employ a 3-fold cross-validation procedure to compute AUCs. Nevertheless, since for the initial dimensionality reduction step we use the CMH test on the full training data, we shall not interpret the AUC as the predictive ability of our constructed network on a subset of SNPs and/or other factors. In other words, the use of AUC here is merely a measure to rank constructed sub-networks of interactions.
Ranking mechanism
To construct the optimal network of interactions, two approaches were employed: one is a backward sequential iterative approach described below, and the other is an approach based on a combination of linkage disequilibrium (LD) analysis [39] and the backward iterative approach. In the (backward) iterative approach, the MWDT was first trained with the remaining SNPs and the 15 demographic variables as part of the network. In each subsequent iteration, the 50 SNPs with the largest CMH p-values were removed and a new network was constructed using the reduced list of SNPs. The best network was the one with the highest AUC in differentiating cases from controls. The LD analysis-based approach sought to eliminate redundant SNPs. LD analysis was performed and SNPs that were strongly linked (i.e. frequently co-occurred in both the cases and the controls) were grouped into bins. The approach outlined by Carlson, et al. [40], with the r2 threshold lowered from 0.8 to 0.4, was used to produce a single tag SNP for each LD bin. Only the tag SNPs were maintained, and the iterative approach was applied to them.
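The backward ranking loop can be sketched as a Mann-Whitney AUC plus a pass that drops the 50 weakest SNPs per iteration. The `evaluate` callback standing in for network construction and cross-validation is a hypothetical placeholder:

```python
def auc(case_scores, control_scores):
    """Mann-Whitney AUC: probability a random case scores above a random control."""
    wins = ties = 0
    for cs in case_scores:
        for ct in control_scores:
            if cs > ct:
                wins += 1
            elif cs == ct:
                ties += 1
    return (wins + 0.5 * ties) / float(len(case_scores) * len(control_scores))

def backward_select(snps_by_pvalue, evaluate, step=50):
    """snps_by_pvalue: SNP names in ascending CMH p-value order.
    evaluate(subset) -> AUC of a network built on that subset (user-supplied).
    Drops the `step` largest-p SNPs each round; keeps the best-scoring subset."""
    best_auc, best_set = -1.0, []
    current = list(snps_by_pvalue)
    while current:
        a = evaluate(current)
        if a > best_auc:
            best_auc, best_set = a, list(current)
        current = current[:-step] if len(current) > step else []
    return best_set, best_auc
```

With a toy `evaluate` that peaks at a particular subset size, the loop returns that subset, mirroring the "best network is the one with the highest AUC" selection.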
This approach ensures that multiple SNPs that act as proxies for one another due to strong LD are not all selected. The tag SNP acts as a proxy for all SNPs in that region. The best networks from the two approaches were compared, and the one with the highest AUC was selected as the SNP×SNP×E network.
Analysis of network composition
To study the gene-level interactions with the phenotype based on SNP-level variations, we enumerate all genes with at least one SNP in the network. For each gene, we construct a sub-network of the SNPs involved in the full SNP×SNP×E network located on that gene and record the AUC of the newly constructed sub-network. We consider race and sex as part of each sub-network. This would unlock the full potential of race- or sex-specific SNPs. In a sense, this analysis is similar to the adjustment for sex and age in classical regression analysis. We next considered important demographic features. To evaluate the importance of each demographic factor, we calculated the decline in resubstitution AUC (AUC on the training set) upon removal of that factor and all edges connected to it from the full SNP×SNP×E network. Resubstitution was used because the response of cross-validation AUC to minor changes is relatively imprecise due to the larger variance of cross-validation estimators [41]. We used the Molecular Signatures Database [42] to determine the lists of genes related to 186 pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG) [43]. For each KEGG pathway, we recorded the AUC of the corresponding sub-network constructed using the SNPs in the full network that are within the pathway's genes, as well as race and sex. Finally, to detect the most important interactions, we successively removed each edge in the full SNP×SNP×E network and recorded the decline in AUC. The analysis left us with an AUC for each gene and pathway, and a decline in AUC for each demographic feature and interaction.
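The significance calculation against 1,000 random networks reduces to an empirical p-value over null draws. A minimal sketch follows; the null AUCs generated here are simulated placeholders, not values from the study:

```python
import random

def empirical_pvalue(observed, null_values):
    """One-sided empirical p-value with the standard +1 correction,
    so the result is never exactly zero."""
    exceed = sum(1 for v in null_values if v >= observed)
    return (exceed + 1) / (len(null_values) + 1.0)

# Example: simulate 1,000 null AUCs standing in for randomly drawn SNP networks.
random.seed(0)
null_aucs = [random.uniform(0.45, 0.60) for _ in range(1000)]
```

An observed AUC above every null draw gets the smallest attainable p-value, 1/1001; an observed AUC below every null draw gets a p-value of 1.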
Rather than reporting the actual AUCs, which here are merely used for ranking purposes, we calculated a p-value associated with each AUC. Although here ranking based on AUC or p-value leads to the same result, we use the p-value threshold of 0.05 (non-adjusted) to narrow down the list. To determine a p-value for each gene- or pathway-specific network, we constructed 1,000 networks, each with the same number of nodes as the network for which the AUC in question was calculated, and determined their AUCs. The set of genetic features for each of the 1,000 networks was drawn randomly from the background set of SNPs. Race and sex were included as features in all 1,000 networks in order to ensure parity with the procedure used to generate the gene- or pathway-specific network. The list of 1,000 random AUCs enabled the calculation of a p-value for the AUC in question. To determine the statistical significance of each decline in AUC (used for quantifying the importance of each demographic variable and the interactions in the SNP×SNP×E network), we used the same background set to construct 1,000 random networks with the same set of demographic factors and the same number of SNPs as in the SNP×SNP×E network. For each randomly generated model, we recorded the decline in AUC upon removal of a random SNP (in the case of the declines in AUC for demographic factors) or a random edge (in the case of the declines in AUC for interactions). The 1,000 random declines in AUC enabled the calculation of a p-value for the decline in AUC of interest. Each gene, demographic factor, pathway, and interaction relationship was now associated with a p-value.
Abbreviations
AA: African American
AUC: Area under receiver operating characteristic curve
CMH: Cochran-Mantel-Haenszel
EA: European American
GWA: Genome-wide association
KEGG: Kyoto Encyclopedia of Genes and Genomes
LD: Linkage disequilibrium
MAF: Minor allele frequency
MLE: Maximum likelihood estimate
MWDT: Maximum-weight dependence tree
SNP: Single nucleotide polymorphism
References
1. Li TK, Hewitt BG, Grant BF.
The Alcohol Dependence Syndrome, 30 years later: a commentary. The 2006 H. David Archibald lecture. Addiction. 2007;102(10):1522–30.
2. Bierut LJ, Agrawal A, Bucholz KK, Doheny KF, Laurie C, Pugh E, Fisher S, Fox L, Howells W, Bertelsen S, et al. A genome-wide association study of alcohol dependence. Proc Natl Acad Sci U S A. 2010;107(11):5082–7.
3. World Health Organization. Global status report on alcohol and health 2011. Geneva: WHO; 2011.
4. Agrawal A, Lynskey MT. Are there genetic influences on addiction: evidence from family, adoption and twin studies. Addiction. 2008;103(7):1069–81.
5. Knopik VS, Heath AC, Madden PA, Bucholz KK, Slutske WS, Nelson EC, Statham D, Whitfield JB, Martin NG. Genetic effects on alcohol dependence risk: re-evaluating the importance of psychiatric and other heritable risk factors. Psychol Med. 2004;34(8):1519–30.
6. Prescott CA, Kendler KS. Genetic and environmental contributions to alcohol abuse and dependence in a population-based sample of male twins. Am J Psychiatry. 1999;156(1):34–40.
7. Edenberg HJ, Koller DL, Xuei X, Wetherill L, McClintick JN, Almasy L, Bierut LJ, Bucholz KK, Goate A, Aliev F, et al. Genome-wide association study of alcohol dependence implicates a region on chromosome 11. Alcohol Clin Exp Res. 2010;34(5):840–52.
8. Zondervan KT, Cardon LR. The complex interplay among factors that influence allelic association. Nat Rev Genet. 2004;5(2):89–100.
9. Manolio TA, Collins FS, Cox NJ, Goldstein DB, Hindorff LA, Hunter DJ, McCarthy MI, Ramos EM, Cardon LR, Chakravarti A, et al. Finding the missing heritability of complex diseases. Nature. 2009;461(7265):747–53.
10. Jamshidi M, Fagerholm R, Khan S, Aittomaki K, Czene K, Darabi H, Li J, Andrulis IL, Chang-Claude J, Devilee P, et al. SNP-SNP interaction analysis of NF-kappaB signaling pathway on breast cancer survival. Oncotarget. 2015;6(35):37979–94.
11. Wei S, Wang LE, McHugh MK, Han Y, Xiong M, Amos CI, Spitz MR, Wei QW.
Genome-wide gene-environment interaction analysis for asbestos exposure in lung cancer susceptibility. Carcinogenesis. 2012;33(8):1531–7.
12. Heidema AG, Boer JM, Nagelkerke N, Mariman EC, van der AD, Feskens EJ. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases. BMC Genet. 2006;7:23.
13. Albert A, Anderson JA. On the existence of maximum likelihood estimates in logistic regression models. Biometrika. 1984;71:1–10.
14. Lin HY, Wang W, Liu YH, Soong SJ, York TP, Myers L, Hu JJ. Comparison of multivariate adaptive regression splines and logistic regression in detecting SNP-SNP interactions and their application in prostate cancer. J Hum Genet. 2008;53(9):802–11.
15. Webb MC, Wilson JR, Chong J. An analysis of quasi-complete binary data with logistic models: applications to alcohol abuse data. J Data Sci. 2004;2:273–85.
16. Chow CK, Liu CN. Approximating discrete probability distributions with dependence trees. IEEE Trans Inf Theory. 1968;14:462–7.
17. Butte AJ, Kohane IS. Mutual information relevance networks: functional genomic clustering using pairwise entropy measurements. Pac Symp Biocomput. 2000;5:418–29.
18. Lavender NA, Rogers EN, Yeyeodu S, Rudd J, Hu T, Zhang J, Brock GN, Kimbro KS, Moore JH, Hein DW, et al. Interaction among apoptosis-associated sequence variants and joint effects on aggressive prostate cancer. BMC Med Genet. 2012;5:11.
19. Margolin AA, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Dalla Favera R, Califano A. ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006;7 Suppl 1:S7.
20. Korpi ER. Role of GABAA receptors in the actions of alcohol and in alcoholism: recent advances. Alcohol Alcohol. 1994;29(2):115–29.
21. Hook V, Funkelstein L, Lu D, Bark S, Wegrzyn J, Hwang SR. Proteases for processing proneuropeptides into peptide neurotransmitters and hormones. Annu Rev Pharmacol Toxicol. 2008;48:393–423.
22. Imoto I, Sonoda I, Yuki Y, Inazawa J.
Identification and characterization of human PKNOX2, a novel homeobox-containing gene. Biochem Biophys Res Commun. 2001;287(1):270–6. Chen X, Cho K, Singer BH, Zhang H. The nuclear transcription factor PKNOX2 is a candidate gene for substance dependence in European-origin women. PLoS One 2011;6:e16002. Agrawal A, Lynskey MT, Todorov AA, Schrage AJ, Littlefield AK, Grant JD, Zhu Q, Nelson EC, Madden PA, Bucholz KK, et al. A candidate gene association study of alcohol consumption in young women. Alcohol Clin Exp Res. 2011;35(3):550–8. Treutlein J, Cichon S, Ridinger M, Wodarz N, Soyka M, Zill P, Maier W, Moessner R, Gaebel W, Dahmen N, et al. Genome-wide association study of alcohol dependence. Arch Gen Psychiatry. 2009;66(7):773–84. Rusk N. Noncoding transcripts as expression boosters. Nat Methods. 2010;7(12):947. Midanik LT, Clark WB. The demographic distribution of US drinking patterns in 1990: description and trends from 1984. Am J Public Health. 1994;84(8):1218–22. Barr KEM, Farrell MP, Barnes GM, Welte JW. Race, Class, and Gender Differences in Substance Abuse: Evidence of Middle-Class/Underclass Polarization among Black Males. Soc Probl. 2004;14:314–27. Sebastiani P, Ramoni MF, Nolan V, Baldwin CT, Steinberg MH. Genetic dissection and prognostic modeling of overt stroke in sickle cell anemia. Nat Genet. 2005;37(4):435–40. Agresti A. Categorical Data Analysis. 2nd ed. New York: Wiley; 2002. Fombonne E, Wostear G, Cooper V, Harrington R, Rutter M. The Maudsley long-term follow-up of child and adolescent depression. 2. Suicidality, criminality and social dysfunction in adulthood. Br J Psychiatry. 2001;179:218–23. Tran VT, Snyder SH, Major LF, Hawley RJ. GABA receptors are increased in brains of alcoholics. Ann Neurol. 1981;9(3):289–92. Dai Q, Pruett SB. Ethanol suppresses LPS-induced Toll-like receptor 4 clustering, reorganization of the actin cytoskeleton, and associated TNF-alpha production. Alcohol Clin Exp Res. 2006;30(8):1436–44. Gruol DL, Parsons KL. 
Chronic alcohol reduces calcium signaling elicited by glutamate receptor stimulation in developing cerebellar neurons. Brain Res. 1996;728(2):166–74. Purohit V, Bode JC, Bode C, Brenner DA, Choudhry MA, Hamilton F, Kang YJ, Keshavarzian A, Rao R, Sartor RB, et al. Alcohol, intestinal bacterial growth, intestinal permeability to endotoxin, and medical consequences: summary of a symposium. Alcohol. 2008;42(5):349–61. Carnicella S, Kharazia V, Jeanblanc J, Janak PH, Ron D. GDNF is a fast-acting potent inhibitor of alcohol consumption and relapse. Proc Natl Acad Sci U S A. 2008;105(23):8114–9. Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MA, Bender D, Maller J, Sklar P, de Bakker PI, Daly MJ, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81(3):559–75. Friedman N, Geiger D, Goldszmidt M. Bayesian Network Classifiers. Mach Learn. 1997;29:131–63. Reich DE, Cargill M, Bolk S, Ireland J, Sabeti PC, Richter DJ, Lavery T, Kouyoumjian R, Farhadian SF, Ward R, et al. Linkage disequilibrium in the human genome. Nature. 2001;411(6834):199–204. Carlson CS, Eberle MA, Rieder MJ, Yi Q, Kruglyak L, Nickerson DA. Selecting a maximally informative set of single-nucleotide polymorphisms for association analyses using linkage disequilibrium. Am J Hum Genet. 2004;74(1):106–20. Braga-Neto U, Hashimoto R, Dougherty ER, Nguyen DV, Carroll RJ. Is cross-validation better than resubstitution for ranking genes? Bioinformatics. 2004;20(2):253–8. Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A. 2005;102(43):15545–50. Kanehisa M, Goto S. KEGG: kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000;28(1):27–30. 
We thank James Thomas and Kent Huynh for their contributions to implementing the pipeline, and Aaron Merlob for critically reviewing the manuscript. This research was partially supported by grants 5R21DA025168-02 (G. Alterovitz), 1R01HG004836-01 (G. Alterovitz), and 4R00LM009826-03 (G. Alterovitz), and the Nazarbayev University Social Policy Grant (A. Zollanvari). Publication of this article was funded by the Nazarbayev University Social Policy Grant (A. Zollanvari). The SAGE data is available from NCBI dbGaP under accession number: phs000092.v1.p1. A.Z. provided the bioinformatics background, designed and implemented the study, and drafted the manuscript. G.A. provided the bioinformatics background, participated in the experimental design and coordination, and helped draft the manuscript; all authors approved the final version of the manuscript. The authors declare that they have no conflict of interest. This article has been published as part of BMC Systems Biology Volume 11 Supplement 3, 2017: Selected original research articles from the Third International Workshop on Computational Network Biology: Modeling, Analysis, and Control (CNB-MAC 2016): systems biology. The full contents of the supplement are available online at http://bmcsystbiol.biomedcentral.com/articles/supplements/volume-11-supplement-3. School of Engineering, Nazarbayev University, Astana, Kazakhstan Amin Zollanvari Center for Biomedical Informatics, Harvard Medical School, Boston, MA, USA Amin Zollanvari & Gil Alterovitz Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA Gil Alterovitz Correspondence to Amin Zollanvari. Supplementary Notes: The first section in this file provides the evidence for functional ties between some of the implicated genes/pathways and alcoholism. The second section in this file details the maximum likelihood estimate of the conditional mutual information. (DOCX 170 kb) Table S1.
This file provides the list of SNPs with CMH test p-value < 0.0005. (XLSX 52 kb) Table S2. This file provides the complete list of interactions in the SNPxSNPxE network. (XLSX 56 kb) Zollanvari, A., Alterovitz, G. SNP by SNP by environment interaction network of alcoholism. BMC Syst Biol 11 (Suppl 3), 19 (2017). https://doi.org/10.1186/s12918-017-0403-7
CommonCrawl
Title: Measurement of double-differential muon neutrino charged-current interactions on C$_8$H$_8$ without pions in the final state using the T2K off-axis beam
Authors: T2K collaboration: K. Abe, C. Andreopoulos, M. Antonova, S. Aoki, A. Ariga, S. Assylbekov, D. Autiero, M. Barbi, G.J. Barker, G. Barr, P. Bartet-Friburg, M. Batkiewicz, V. Berardi, S. Berkman, S. Bhadra, A. Blondel, S. Bolognesi, S. Bordoni, S.B. Boyd, D. Brailsford, A. Bravar, C. Bronner, M. Buizza Avanzini, R.G. Calland, S. Cao, J. Caravaca Rodríguez, S.L. Cartwright, R. Castillo, M.G. Catanesi, A. Cervera, D. Cherdack, N. Chikuma, G. Christodoulou, A. Clifton, J. Coleman, G. Collazuol, L. Cremonesi, A. Dabrowska, G. De Rosa, T. Dealtry, P.F. Denner, S.R. Dennis, C. Densham, D. Dewhurst, F. Di Lodovico, S. Di Luise, S. Dolan, O. Drapier, K.E. Duffy, J. Dumarchez, S. Dytman, M. Dziewiecki, S. Emery-Schrenk, A. Ereditato, T. Feusels, A.J. Finch, G.A. Fiorentini, M. Friend, Y. Fujii, D. Fukuda, Y. Fukuda, A.P. Furmanski, V. Galymov, A. Garcia, S.G. Giffin, C. Giganti, F. Gizzarelli, M. Gonin, N. Grant, D.R. Hadley, L. Haegel, M.D. Haigh, P. Hamilton, D. Hansen, T. Hara, M. Hartz, T. Hasegawa, N.C. Hastings, T. Hayashino, Y. Hayato, R.L. Helmer, M. Hierholzer, A. Hillairet, A. Himmel, T. Hiraki, S. Hirota, M. Hogan, J. Holeczek, S. Horikawa, F. Hosomi, K. Huang, A.K. Ichikawa, K. Ieki, M. Ikeda, J. Imber, J. Insler, R.A. Intonti, T.J. Irvine, T. Ishida, T. Ishii, E. Iwai, K. Iwamoto, A. Izmaylov, A. Jacob, B. Jamieson, M. Jiang, S. Johnson, J.H. Jo, P. Jonsson, C.K. Jung, M. Kabirnezhad, A.C. Kaboth, T. Kajita, H. Kakuno, J. Kameda, D. Karlen, I. Karpikov, T. Katori, E. Kearns, M. Khabibullin, A. Khotjantsev, D. Kielczewska, T. Kikawa, H. Kim, J. Kim, S. King, J. Kisiel, A. Knight, A. Knox, T. Kobayashi, L. Koch, T. Koga, A. Konaka, K. Kondo, A. Kopylov, L.L. Kormos, A. Korzenev, Y. Koshio, W. Kropp, Y. Kudenko, R. Kurjata, T.
Kutter, J. Lagoda, I. Lamont, E. Larkin, P. Lasorak, M. Laveder, M. Lawe, M. Lazos, T. Lindner, Z.J. Liptak, R.P. Litchfield, X. Li, A. Longhin, J.P. Lopez, L. Ludovici, X. Lu, L. Magaletti, K. Mahn, M. Malek, S. Manly, A.D. Marino, J. Marteau, J.F. Martin, P. Martins, S. Martynenko, T. Maruyama, V. Matveev, K. Mavrokoridis, W.Y. Ma, E. Mazzucato, M. McCarthy, N. McCauley, K.S. McFarland, C. McGrew, A. Mefodiev, M. Mezzetto, P. Mijakowski, A. Minamino, O. Mineev, S. Mine, A. Missert, M. Miura, S. Moriyama, Th.A. Mueller, S. Murphy, J. Myslik, T. Nakadaira, M. Nakahata, K.G. Nakamura, K. Nakamura, K.D. Nakamura, S. Nakayama, T. Nakaya, K. Nakayoshi, C. Nantais, C. Nielsen, M. Nirkko, K. Nishikawa, Y. Nishimura, J. Nowak, H.M. O'Keeffe, R. Ohta, K. Okumura, T. Okusawa, W. Oryszczak, S.M. Oser, T. Ovsyannikova, R.A. Owen, Y. Oyama, V. Palladino, J.L. Palomino, V. Paolone, N.D. Patel, M. Pavin, D. Payne, J.D. Perkin, Y. Petrov, L. Pickard, L. Pickering, E.S. Pinzon Guerra, C. Pistillo, B. Popov, M. Posiadala-Zezula, J.-M. Poutissou, R. Poutissou, P. Przewlocki, B. Quilain, E. Radicioni, P.N. Ratoff, M. Ravonel, M.A.M. Rayner, A. Redij, E. Reinherz-Aronis, C. Riccio, P. Rojas, E. Rondio, S. Roth, A. Rubbia, A. Rychter, R. Sacco, K. Sakashita, F. Sánchez, F. Sato, E. Scantamburlo, K. Scholberg, S. Schoppmann, J. Schwehr, M. Scott, Y. Seiya, T. Sekiguchi, H. Sekiya, D. Sgalaberna, R. Shah, A. Shaikhiev, F. Shaker, D. Shaw, M. Shiozawa, T. Shirahige, S. Short, M. Smy, J.T. Sobczyk, M. Sorel, L. Southwell, P. Stamoulis, J. Steinmann, T. Stewart, Y. Suda, S. Suvorov, A. Suzuki, K. Suzuki, S.Y. Suzuki, Y. Suzuki, R. Tacik, M. Tada, S. Takahashi, A. Takeda, Y. Takeuchi, H.K. Tanaka, H.A. Tanaka, D. Terhorst, R. Terri, T. Thakore, L.F. Thompson, S. Tobayama, W. Toki, T. Tomura, C. Touramanis, T. Tsukamoto, M. Tzanov, Y. Uchida, A. Vacheret, M. Vagins, Z. Vallari, G. Vasseur, T. Wachala, K. Wakamatsu, C.W. Walter, D. Wark, W. Warzycha, M.O. Wascko, A. Weber, R. Wendell, R.J. 
Wilkes, M.J. Wilking, C. Wilkinson, J.R. Wilson, R.J. Wilson, Y. Yamada, K. Yamamoto, M. Yamamoto, C. Yanagisawa, T. Yano, S. Yen, N. Yershov, M. Yokoyama, K. Yoshida, T. Yuan, M. Yu, A. Zalewska, J. Zalipska, L. Zambelli, K. Zaremba, M. Ziembicki, E.D. Zimmerman, M. Zito, J. Żmuda et al. (228 additional authors not shown) (Submitted on 11 Feb 2016 (v1), last revised 18 Feb 2016 (this version, v2)) Abstract: We report the measurement of muon neutrino charged-current interactions on carbon without pions in the final state at the T2K beam energy using 5.734$\times10^{20}$ protons on target. For the first time the measurement is reported as a flux-integrated, double-differential cross-section in muon kinematic variables ($\cos\theta_\mu$, $p_\mu$), without correcting for events where a pion is produced and then absorbed by final state interactions. Two analyses are performed with different selections, background evaluations and cross-section extraction methods to demonstrate the robustness of the results against biases due to model-dependent assumptions. The measurements compare favorably with recent models which include nucleon-nucleon correlations but, given the present precision, the measurement does not solve the degeneracy between different models. The data also agree with Monte Carlo simulations which use effective parameters that are tuned to external data to describe the nuclear effects. The total cross-section in the full phase space is $\sigma = (0.417 \pm 0.047 \text{(syst)} \pm 0.005 \text{(stat)})\times 10^{-38} \text{cm}^2$ $\text{nucleon}^{-1}$ and the cross-section integrated in the region of phase space with largest efficiency and best signal-over-background ratio ($\cos\theta_\mu>0.6$ and $p_\mu > 200$ MeV) is $\sigma = (0.202 \pm 0.0359 \text{(syst)} \pm 0.0026 \text{(stat)}) \times 10^{-38} \text{cm}^2$ $\text{nucleon}^{-1}$. Comments: 44 pages, 17 figures. 
Modifications from previous version: references fixed and style of Fig. 11 improved for black and white printing
Subjects: High Energy Physics - Experiment (hep-ex)
Journal reference: Phys. Rev. D 93, 112012 (2016)
DOI: 10.1103/PhysRevD.93.112012
Cite as: arXiv:1602.03652 [hep-ex] (or arXiv:1602.03652v2 [hep-ex] for this version)
From: Sara Bolognesi
[v1] Thu, 11 Feb 2016 09:37:57 UTC (1,069 KB) [v2] Thu, 18 Feb 2016 09:56:05 UTC (981 KB)
CommonCrawl
\begin{document}
\title{Distribution of time-bin entangled qubits over 50\,km of optical fiber}
\author{I. Marcikic, H. de Riedmatten, W. Tittel, H. Zbinden, M. Legr\'{e} and N. Gisin}
\affiliation{Group of Applied Physics-Optique, University of Geneva, CH-1211, Geneva 4, Switzerland}
\begin{abstract} We report experimental distribution of time-bin entangled qubits over 50\,km of optical fibers. Using actively stabilized preparation and measurement devices, we demonstrate violation of the CHSH Bell inequality by more than 15 standard deviations without removing the detector noise. In addition, we report a proof-of-principle experiment of quantum key distribution over 50\,km of optical fibers using entangled photons. \end{abstract}
\maketitle
In the science of quantum information a central experimental issue is how to distribute entangled states over large distances. Indeed, most protocols in quantum communication require the different parties to share entanglement. The best-known examples are Quantum Teleportation \cite{brassard93} and Ekert's Quantum Key Distribution (QKD) protocol \cite{ekert}. Note that even in protocols that do not explicitly require entanglement, like the BB84 QKD protocol \cite{bb84}, security proofs are often based on ``virtual entanglement'', i.e. on the fact that an ideal single photon source is indistinguishable from an entangled photon pair source in which one photon is used as a trigger \cite{ShorPreskill00}. From a more practical point of view, entanglement over significant distances can be used to increase the maximal distance a quantum state can cover, as in quantum repeater \cite{briegel98} and quantum relay \cite{relay} protocols. Finally, entanglement is also treated as a resource in the study of communication complexity \cite{Brassard03}. As entanglement cannot be created by shared randomness and local operations, it must be somehow distributed.
Recently there have been proposals to use satellites for long-distance transmission \cite{aspelmeyer03}, and some free-space experiments have been performed, either for QKD (over 50\,m) \cite{beveratos02} or for the transmission of entangled qubits (over 600\,m) \cite{aspelmeyer031}. Despite weather and daylight problems, this is an interesting approach. Another possibility, which we follow in this work, is to use the worldwide installed optical fiber network. This, however, implies some constraints: one should operate at telecommunication wavelengths (1.3 or 1.55\,$\mu$m) in order to minimize losses in optical fibers, and the encoding of the qubits must be robust against decoherence in optical fibers. Arguably the most adequate way to encode qubits is to use energy-time \cite{franson89} or its discrete version, time-bin encoding \cite{jurgen99}. The major drawback of this kind of encoding, compared to polarization encoding, is that preparation and measurement are more complex: they rely on stable interferometers. In this letter we report a way to create and measure time-bin entangled qubits which allows us to violate Bell inequalities over 50\,km of optical fibers and to show a proof of principle for entanglement-based QKD over long distances. Moreover, it allows us to demonstrate the stability of our entire set-up over several hours. Let us first remind the reader how time-bin entangled qubits are created and measured. They are created by sending a short laser pulse first through an unbalanced interferometer (denoted the pump interferometer) and then through a non-linear crystal, where a pair of photons is eventually created by spontaneous parametric down conversion (SPDC) (see Fig.\ref{setup}).
The state can be written:
\begin{equation}
\left| \Psi \right\rangle =\frac{1}{\sqrt{2}}(\left| 0\right\rangle _{A}\left| 0\right\rangle _{B}-e^{i\varphi}\left| 1\right\rangle _{A}\left|1\right\rangle _{B})
\label{2}
\end{equation}
where $\left|0\right \rangle$ represents a photon in the first time-bin (having passed through the short arm) and $\left|1\right \rangle$ a photon in the second time-bin (having passed through the long arm). The indices $A$ and $B$ denote Alice's and Bob's photons. The phase $\varphi$ is defined with respect to a reference path-length difference $\Delta \tau$ between the short and the long arm.
\begin{figure} \caption{{Scheme of the experimental set-up. Time-bin qubits are prepared by passing a fs pulse through the pump interferometer and a non-linear crystal (NLC). Eventually, a pair of entangled photons is created in the crystal. They are sent to Alice and Bob through 25.3\,km of optical fibers. Alice and Bob analyze photons using interferometers equally unbalanced with respect to the pump interferometer. All three interferometers are built using passive 50-50 beam-splitters (BS). Alice's and Bob's detection times are also represented. }} \label{setup} \end{figure}
The photons A and B are then sent to Alice and Bob, who perform projective measurements using similar unbalanced interferometers. There are three detection times on Alice's (Bob's) detectors with respect to the emission time of the pump laser (see Fig.\ref{setup}). The first and last peaks (denoted satellite peaks) correspond to events which are temporally distinguishable: the left (right) peak corresponds to a photon created in the first (second) time-bin which passed through the short (long) arm of Alice's interferometer. When detected in the left (right) satellite peak, the photon is projected onto the vector $\left|0\right \rangle$ ($\left|1\right \rangle$) (the poles of the Poincar\'e qubit sphere).
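As a quick consistency check (not part of the experimental set-up), the two-photon state defined above is maximally entangled for any pump phase $\varphi$: tracing out Bob's qubit leaves Alice's qubit in the fully mixed state. A minimal numpy sketch, where the phase value is arbitrary:

```python
import numpy as np

# Time-bin entangled state: (|0>_A|0>_B - e^{i*phi} |1>_A|1>_B) / sqrt(2).
phi = 0.3  # arbitrary illustrative phase
ket00 = np.kron([1, 0], [1, 0])
ket11 = np.kron([0, 1], [0, 1])
psi = (ket00 - np.exp(1j * phi) * ket11) / np.sqrt(2)

# Partial trace over Bob's qubit: reshape to indices (a, b, a', b')
# and sum over b = b'.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_A = np.trace(rho, axis1=1, axis2=3)

# For a maximally entangled state, rho_A = I/2, i.e. one ebit of
# entanglement entropy, independent of phi.
p = np.linalg.eigvalsh(rho_A)
entropy = float(-sum(x * np.log2(x) for x in p if x > 1e-12))
print(round(entropy, 6))  # -> 1.0
```

The entropy is 1 ebit regardless of $\varphi$, which is why only phase differences between the three interferometers (and not the pump phase itself) matter below.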
Photons detected in the central peak can be due either to events where the photon is created in the first time-bin and passes through the long arm of Alice's interferometer, or to events where the photon is created in the second time-bin and passes through the short arm. In this case the photon is projected onto the vector $\frac{1}{\sqrt{2}}(\left|0\right\rangle +e^{i\alpha}\left|1\right\rangle)$ (i.e. onto the equator of the Poincar\'e qubit sphere). Note that when Alice records the central peak she does not observe single-photon interference when changing the phase of her interferometer, because which-path information can be obtained by recording the emission time of Bob's photon. By analogy with experiments using polarization-entangled photons, we refer to this as rotational invariance \cite{clauser74}. If Alice and Bob both record counts in their central peaks, they observe second-order interference when changing the phase of either Alice's, Bob's or the pump interferometer. The coincidence count rate between Alice's and Bob's detectors $A_iB_j$ is then given by:
\begin{equation}
R_{A_i,B_j}(\alpha,\beta,\varphi)\sim 1+ijV\cos(\alpha+\beta-\varphi)
\label{3}
\end{equation}
where $i$ and $j=\pm 1$ (see Fig.\ref{setup}) and $V$ is the visibility of the interference fringes (which can in principle reach 1). We take the imbalance of the pump interferometer as the reference time difference $\Delta\tau$ between the first and second time-bins; the phase $\varphi$ is thus set to zero.
The correlation coefficient is defined as:
\begin{equation}
E(\alpha,\beta)=\frac{\displaystyle \sum_{i,j} ijR_{A_iB_j}(\alpha,\beta)}{\displaystyle \sum_{i,j} R_{A_iB_j}(\alpha,\beta)}
\label{4}
\end{equation}
and by inserting Eq.\ref{3} into Eq.\ref{4} the correlation coefficient becomes:
\begin{equation}
E(\alpha,\beta)=V\cos(\alpha+\beta)
\label{5}
\end{equation}
The Bell inequalities define an upper bound for the correlations that can be described by local hidden variable theories (LHVT). One of the most frequently used forms, known as the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality \cite{clauser}, is:
\begin{equation}
S=\vert E(\alpha,\beta)+E(\alpha,\beta')+E(\alpha',\beta)-E(\alpha',\beta') \vert \leq 2
\label{6}
\end{equation}
Quantum mechanics predicts that $S$ reaches its maximum value of $S=2\sqrt{2}$ with, for the sign convention of Eq.\ref{5}, $\alpha= 0^{\circ}$, $\alpha'= 90^{\circ}$, $\beta=-45^{\circ}$ and $\beta'=45^{\circ}$. It has also been shown that when the correlation function has the sinusoidal form of Eq.\ref{5} and when there is rotational invariance, the boundary condition of Eq.\ref{6} can be written as:
\begin{equation}
S=2\sqrt{2}V \leq 2\label{7}
\end{equation}
thus $V > \frac{1}{\sqrt{2}}$ implies violation of the CHSH Bell inequality, i.e. the correlations cannot be explained by LHVT. Our experimental set-up is the following (see Fig.\ref{setup}): a 150\,fs laser pulse at a wavelength of 710\,nm and with a repetition rate of 75\,MHz is first sent through an unbalanced, bulk, Michelson interferometer with an optical path difference of $\Delta\tau=1.2$\,ns and then through a type-I LBO (lithium triborate) non-linear crystal, where collinear non-degenerate photon pairs at 1.3 and 1.55\,$\mu$m wavelength can be created by SPDC. The pump beam is then removed with a silicon filter and the pairs are coupled into an optical fiber.
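As a numerical sanity check (not from the paper), the sinusoidal correlation coefficient $E(\alpha,\beta)=V\cos(\alpha+\beta)$ indeed gives $S=2\sqrt{2}V$ at suitably chosen analyzer settings; note that the signs of $\beta$ and $\beta'$ below are chosen to match the $\cos(\alpha+\beta)$ convention:

```python
import math

def E(alpha, beta, V):
    # Correlation coefficient for visibility V: E = V*cos(alpha + beta).
    return V * math.cos(alpha + beta)

def chsh_S(V, a=0.0, a2=math.pi / 2, b=-math.pi / 4, b2=math.pi / 4):
    # CHSH combination |E(a,b) + E(a,b') + E(a',b) - E(a',b')|;
    # the default angles maximize it for the cos(alpha+beta) convention.
    return abs(E(a, b, V) + E(a, b2, V) + E(a2, b, V) - E(a2, b2, V))

# Perfect visibility saturates the quantum (Tsirelson) bound 2*sqrt(2)...
print(round(chsh_S(1.0), 3))                 # -> 2.828
# ...and the classical bound S = 2 is reached exactly at V = 1/sqrt(2),
# so any V above that threshold violates the CHSH inequality.
print(round(chsh_S(1 / math.sqrt(2)), 3))    # -> 2.0
```

This is why the visibility alone, under rotational invariance, is enough to decide whether the inequality is violated.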
The photons are separated with a wavelength-division multiplexer; the 1.3\,$\mu$m photon is sent through 25.3\,km of standard optical fiber (SOF) to Alice and the 1.55\,$\mu$m photon through 25.3\,km of dispersion-shifted fiber (DSF) to Bob \cite{explication1}. Alice's photon is then measured with a fiber Michelson interferometer and detected by one of two liquid-nitrogen-cooled, passively quenched germanium avalanche photodiodes (APDs), $A_{+1}$ or $A_{\text{\small{-}}1}$. Their quantum efficiency is around 10\,\% with 20\,kHz of dark counts. In order to select only the central-peak events and also to reduce the detector dark counts, a coincidence is made with the emission time of the laser pulse. This signal then triggers Bob's detectors ($B_{+1}$ and $B_{\text{\small{-}}1}$), two InGaAs APDs (IdQuantique) working in so-called gated mode. Although both detectors have similar quantum efficiencies of 20\,\%, the dark count probability of detector $B_{+1}$ is two times smaller than that of $B_{\text{\small{-}}1}$, at around $10^{-4}$\,/ns. To reduce chromatic dispersion in the optical fibers and the detection of multiple pairs \cite{marcikic02}, we use interference filters with spectral widths of 10\,nm for the 1.3\,$\mu$m photons and 18\,nm for the 1.55\,$\mu$m photons. Using 70\,mW of average input power (measured after the pump interferometer), the probability of creating an entangled qubit pair per pulse is around 8\,\%. Bob's analyzer is also a Michelson-type interferometer built with optical fibers. To better control the phase and to achieve long-term stability, all three interferometers are passively and actively stabilized. Passive stabilization consists of controlling the temperature of each interferometer.
Active stabilization consists of probing each interferometer's phase with a frequency-stabilized laser at 1.534\,$\mu$m (Dicos) and locking it to a desired value via a feedback loop on a piezo actuator (PZA) included in each interferometer. In order to change the path difference in the bulk pump interferometer, one of the mirrors is mounted on a translation stage including a PZA with a range of around 4\,$\mu$m. In the analyzing interferometers the long fiber path is wound around a cylindrical PZA with a circumference variation range of 60\,$\mu$m. Unlike the bulk interferometer, which is continuously stabilized, the phases of the fiber interferometers cannot be stabilized during the measurement period. Thus we continuously alternate between measurement periods of 100\,seconds and stabilization periods of 5\,seconds. This method allows us not only to stabilize the entire set-up for several hours, but also to have good control over both phases $\alpha$ and $\beta$. In order to show a violation of the CHSH Bell inequality after 50\,km of optical fibers, we proceed in two steps: first we scan Bob's phase $\beta$ while Alice's phase $\alpha$ is kept constant. We obtain a raw visibility of around $78\pm1.6$\,\% (see Fig.\ref{corr}), from which we can infer an $S$ parameter of $S=2.206\pm0.045$ (Eq.\ref{7}), leading to a violation of the CHSH Bell inequality by more than 4 standard deviations. The coincidence count rate between any combination of detectors $A_iB_j$ is around 3\,Hz. \begin{figure}\label{corr} \end{figure} The raw visibility of the correlation function is mainly reduced by the creation of multiple pairs (around 9\,\%), by accidental coincidence counts (related to the dark counts of our detectors, around 8\,\%) and by misalignment of the interferometers (around 5\,\%). In principle one could reduce the creation of multiple pairs by reducing the input power, but then the coincidence count rate would also decrease.
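Since $S=2\sqrt{2}V$ under rotational invariance, the quoted raw visibility translates directly into the quoted $S$ value, with the uncertainty following by linear error propagation. A quick check of the numbers above (a sketch, not part of the analysis code):

```python
import math

V, dV = 0.78, 0.016           # raw visibility and its 1-sigma uncertainty
S = 2 * math.sqrt(2) * V      # S = 2*sqrt(2)*V under rotational invariance
dS = 2 * math.sqrt(2) * dV    # linear error propagation
n_sigma = (S - 2) / dS        # distance above the local bound S = 2

print(round(S, 3), round(dS, 3), round(n_sigma, 1))  # -> 2.206 0.045 4.6
```

The result reproduces $S=2.206\pm0.045$, i.e. a violation by about 4.6 standard deviations, consistent with "more than 4" quoted in the text.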
With our new interferometers we are able, for the first time with time-bins, to perform the second step: measure the CHSH Bell inequality according to Eq.\ref{6}, i.e. lock the phases to the desired values in order to measure the four correlation coefficients one after the other. To reduce statistical fluctuations, we measure each correlation coefficient (Eq.\ref{4}) for almost an hour per setting. The obtained $S$ parameter is $S=2.185\pm0.006$, which shows a violation of the CHSH Bell inequality by more than 15 standard deviations (see Fig.\ref{fset}). \begin{figure}\label{fset} \end{figure} It has been proven that when the Bell inequality is violated, entangled photons can be used for quantum cryptography \cite{rmp}. Our QKD protocol is analogous to the BB84 protocol, using time-bin entangled photons \cite{tittel00}. Hence, Alice and Bob use two maximally conjugate measurement bases. The first basis is defined by the two orthogonal vectors $\left| 0 \right \rangle$ and $\left|1 \right \rangle$ represented on the poles of the Poincar\'{e} qubit sphere (Fig.\ref{setup}). The projection onto this basis is performed whenever a photon is detected in a satellite peak. Let us illustrate how Alice and Bob encode their bits: whenever Alice detects her photon in the first (second) satellite peak, she knows that the pair was created in the first (second) time-bin; Bob can then detect the twin photon either in the first (second) satellite peak or in the central peak, but never in the second (first) satellite peak. Thus, after discarding central-peak events during basis reconciliation, Alice and Bob encode their bits as 0 (1) if the photon is detected in the first (second) satellite peak.
The second basis is defined by two orthogonal vectors represented on the equator of the Poincar\'{e} sphere (for example $\frac{\left| 0 \right \rangle+\left| 1 \right \rangle}{\sqrt{2}}$ and $\frac{\left| 0 \right \rangle-\left| 1 \right \rangle}{\sqrt{2}}$). The projection onto this basis is performed when a photon is detected in the central peak. Alice and Bob have to adjust their interferometers such that there is perfect correlation between detectors $A_{+1}B_{+1}$ and $A_{\text{\small{-}}1}B_{\text{\small{-}}1}$. The encoding of bits 0 and 1 in this basis is thus defined by which detector fires. As Alice's and Bob's photons passively choose their respective measurement bases, there is a 50\,\% probability that they are detected in the same basis, which ensures security against the photon-number-splitting attack \cite{rmp}. We report a proof of principle of entanglement-based QKD over 50\,km of optical fiber. In our experimental set-up, Alice sequentially selects one of the three detection windows by looking at the arrival time of her photon with respect to the emission of the laser pulse (see Fig.\ref{setup}). This signal is then used to trigger Bob's detectors. In the first measurement basis the measured quantum bit error rate (QBER) \cite{explication2} is $12.8\pm0.1$\,\% and the measured raw bit rate around 5\,Hz. The QBER is due to accidental coincidence counts (around 8\,\%) and to the creation of multiple pairs (around 4.5\,\%, see Fig.\ref{crypto}a)). In the second measurement basis the measured QBER is $10.5\pm0.09$\,\% (Fig.\ref{crypto}b)), with a bit rate of 6\,Hz. In this case the QBER is due to accidental coincidences (around 4\,\%), to the creation of multiple pairs (around 4.5\,\%) and to a slight misalignment of our interferometers (around 2\,\%). In order to have a low statistical error, the integration time for both bases is around six hours.
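A back-of-the-envelope check of the error budgets quoted above, under the simplifying assumption that the individual (small, roughly independent) contributions add linearly:

```python
# Approximate QBER contributions quoted in the text, per measurement basis.
qber_first = 0.08 + 0.045            # accidental coincidences + multiple pairs
qber_second = 0.04 + 0.045 + 0.02    # accidentals + multi-pair + misalignment

# -> 0.125 0.105 (to be compared with the measured 0.128 and 0.105)
print(round(qber_first, 3), round(qber_second, 3))
```

The sums (12.5\% and 10.5\%) are consistent with the measured values of $12.8$\% and $10.5$\%, supporting the attribution of the errors to these three sources.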
The difference between the QBERs measured in the two bases is due to the fact that in the first measurement basis the detectors are opened during two time windows instead of one in the second basis. However, in the first basis the misalignment of the interferometers does not introduce any error. Note that by using two InGaAs APDs with the same low dark count probability as detector $B_{+1}$, the QBER in the first measurement basis would be reduced to 10.8\,\% and in the second basis to 9.8\,\%. \begin{figure}\label{crypto} \end{figure} For a true implementation of QKD using time-bin entangled photons, it is necessary that Alice and Bob can monitor detections in all three time windows simultaneously, and not one after the other as presented here. In addition, as Alice has to trigger Bob's detectors, it is important to ensure that Eve does not get any information about Alice's detection times. These extensions would require more coincidence electronics but can easily be implemented. Finally, note that Alice's trigger signal has to arrive at Bob's site before the photon, thereby putting constraints on the distances between Alice, Bob and the source of entangled photons. These limitations can be removed by using passively quenched InGaAs APDs (work in progress) or detectors based on superconductivity \cite{sobolewski03}. In this letter we have presented an experimental distribution of time-bin entangled photons over 50\,km of optical fiber. Using active phase stabilization with a frequency-stabilized laser and a feedback loop, long-term stability and control of the interferometers' phases are achieved. In the first experiment, the CHSH Bell inequality is violated by more than 15 standard deviations without removing the detector noise. The possibility of changing the phases in a controlled way also allowed us to show a proof of principle of entanglement-based quantum key distribution over 50\,km of optical fiber.
An average quantum bit error rate of 11.5\,\% is demonstrated, which is small enough to establish quantum keys secure against individual attacks \cite{Fuchs97}. Finally, the long-term stability of the set-up opens the way to future demonstrations of more complex quantum communication protocols requiring long measurement times, as is the case for the entanglement swapping protocol. The authors would like to thank Claudio Barreiro and Jean-Daniel Gautier for technical support. Financial support by the Swiss NCCR Quantum Photonics and by the European project RamboQ is acknowledged. \end{document}
arXiv
John draws a regular five-pointed star in the sand, and at each of the 5 outward-pointing points and 5 inward-pointing points he places one of ten different sea shells. How many ways can he place the shells, if reflections and rotations of an arrangement are considered equivalent? There are $10!$ ways to place the shells in the sand, not considering rotations and reflections. An arrangement can be rotated by 0, 1/5, 2/5, 3/5, or 4/5 of a full turn, and each rotation can be combined with a reflection, giving a symmetry group of 10 elements. Since all ten shells are distinct, no arrangement is fixed by a non-identity symmetry, so the arrangements come in groups of ten equivalent ones. Correcting for the symmetries, we find that there are $10!/10=\boxed{362880}$ distinct arrangements.
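The division by 10 is an instance of Burnside's lemma: the symmetry group has order $5 \times 2 = 10$ (five rotations, each optionally combined with a reflection), and with ten distinct shells only the identity fixes any placement. A quick check:

```python
from math import factorial

group_order = 5 * 2  # 5 rotations (including identity) x optional reflection
# Burnside: (average number of placements fixed per group element).
# All 10! placements are fixed by the identity; no placement is fixed by
# any other symmetry, since the ten shells are all distinct.
arrangements = factorial(10) // group_order
print(arrangements)  # -> 362880
```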
Math Dataset